[Binary artifact: tar archive of Zuul CI job output. Members: var/home/core/zuul-output/, var/home/core/zuul-output/logs/, var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet.log). The compressed payload is not representable as text.]
L&A)NB+# -XJ EAPa6c[/Ssdz릂h1jŐ!"cw 4 L!9}_ >cr0Trm*AބdC=v~"^^Mv钚\ FH0Ps#E-ɓjXZ"x]ע{u_v*+4`tlZX(JIS%B4-w0`U@u+D7Pk.IFʹ I#ͤHXCh4>|[AcHi&/j1 ;zfc+B*嵁cji vRU= xƔǶ)5jհùU ܪW f3.&cj~$]p:"))[0^ur ܛωO+FF89lAZT[FuoA #RqCm!6uG9m.A KXl 03eψbzgMV^Do> ,B At<oرQ񌣮O˿ɫc0m_#lìyqkb쐠6Yл:״  "e-atϽ4T i֤Cu3p[%?[j8 Ec x$&8GRV78>uںQe]?Sn { sCː@!+\i`iPLVTpW=ͬc5.QtWrܒ ':J*g}#zzU"Wm^nZ_W p-ZkMT$om](oV8myi핞 2Hg8O*vAsiݕ5ci*\?X !D9+Wh%)HiߍBij`tmr%],ˆߴ+̉ ݤfkoXU#*c{tھ9 QO-ݡP$UD.8TN8v+g0ܔ jh@@`{CawB dx x%֠PWj Δz;_<;btĽ^۔bZ\$ZnĔbbl4rUun)]nU_X"IS0XbbD(^?$kz6:N1 -{P[Cp,å# x g+o5] 3B>2$hdӆ>SgFW`ثR%S*xܨ+kt(FVP8(؈x%jcx%&V"q4nIEF)$y#m } (U+ـЧf3kPD}ُ=v^Ia Roy a ڻui ~:UX၂Ui Du$~G5qOIehԒE)Xx.fő@(tqCMEuop<twQz(FT w${XT(7 DX>ÇJ?_3}9_uKI`sG~[;63b{v&*r(l9K۲i/;f>Y6[e O*7,E@a}'( Tx^~6uuBb,Q͂`ueO7MV͛sJ*]8%"3wacSvb}.V%F_+m>D_NK F̿: )w[F)*LRs,/,I SRbp~DA'X@#qb)aAd(eY>¾0dÉB%qvPX%I 6wUy$GZ1^}wL}=[n mXUĂ:p,D/-g%nXRI-ѿ)&b sA v5cL* Dz ZX粔JR\RXB qF"nވ A "3^bv0B!дZ>|fUsR/">NJ@)/WtUfc,@f C>YyW.7?~q=mW6wQb V؏*hd0(SCɔ*5)W Lehvw *p^@2_5!=G13- Yׁ>nlƾms}[:׮v沬 z?\Ep%\_uXvL9?8ܥXL1NRX@}Y:~.7EQAx B[u@"d@u3Ak\Glٯ6/K)eb;q&ue +* \d^?>T@o>dkN/QEs7vocqm9X,u\c^upag{T9Dn*(W ~(!!Ra@S JdvJť5c0_FS&/ZV<v&Zk@l8@Qa7vU1%|1NZ\_mO] [tAihܿ2!x B$ ԍ$F7"d \>F„*KOgwx]K8|eބ .3 @ 9D>&& iKpiu!QBd%"nR*kYPI̡, u7WK+r_ힶ7k6ʺ:I!X~޻,teL r$H\0 XJaϻrTR%(1JH399"fCiKFCMVļttWML 5`p^Gl..w#[/eJGD3~ GXL4շ3be™ڠ$=o7: =pԅN Q),(bk-JLaPKB4l EO`锕: ^hvF!bv`*55,=ywۮ2im9UHB-W0P3ԬT2 (/NlLKTe1;"+5SV[Zv>x-1IDm8 jԪJL1ADĊ6l=qL_؝SX$v{Ir`; {;J򓹯v=$t{βxEA\> ef&QgU+h٨ -TtnLQ7PVm|yAVܺJSPI\_t:ܼ@nUq48WM!BȜ)1O81&&Dǧ]sEB:hO NH{8- кt xMU-\ei7)[{Ki  A!Ь;3DɈzhSF6)2Rf%$Ϝނ噌JpiV sE!$f\7=g̑T08=WLv9x (T=YŽR`2.Ugf'L 8;{zw#mE)I}Q>\|F󕅕:5&R4sRzD"\ኟr(**JeuQ~5O(QY{^fK2sŲjۍڑ/lc,\2oOn\""1EoUf2!p;]3ς!HWf%&ǢnGW~\t BqmP}D!TW(l#j֕e S'C5LI\Ô5LI\bWA0F" cJ4s$D"LWS.CՁ*Q1$Oq/\n5'y->5S[&Ң?/d} T 8dt[pC[(uQi'H v2&'Qm!'d65gzTe2 hLUflgF/N8/dqF $<|MeI<12ĄIw7ź$XO(` ĕQ6ˁ9yZ༬NwYjJ|#l׆t,TXQ,0LF<)5h?b ; 9Z8d6k*EX;ϝ7~+ZIg<ԭb<ѓugTRgQ]6vHdw96n6oM9+T1_-%4EoPHv_vrF4xΟzyi>[G?e7}].IJ 'x6=N yra3Ff=<*H̻Ez?QʴTm^Ye>/RUVwWcBB/yqBQ -7mפ"=:S;<`I4~~.La) (f5'`Ű5^r= pPDe{ҤϙֳĒi @\+kFr -,u(;-.Fl5蝐u:!#TzŠY,8/2,.ŭLҍf TYׅ2s0ݨ"Sb;Jo/+-zfdJgq83gﮨ&0z +}sy,eJpS=Ln47T4fL(kKK[}`ymzkdDiQrwtr_ހG^ 9j}4FېJ($% 9l^zqErV5=ajJ4/"`O9֐݅`!*xh68?Gcm*jɢLm^r2 ލ'#^҅Yդ ٧B4 P-O@_V [T(7:z婓r -*HBYg0wl[x@ z^& Ʋc˜`vYfZXfUn=+oY:SkJrʿ<K+ԶlQ +}gǞ}Hq3{aNޞ1γ|oLvh|KTe=cQ-}мNb̺`Qd*I_@]plg=c䍵[M3b.˹ܶ3ZmuE`nT~3{<6OT|.eRngd;Ɏw_P-ɯ= \=(O 3+ho9)NO6r~ɒ|/KPr"Enw}Uk\rjɑ7֌ZMw,̮z5Z!bO/ǷR+2Cq嗩D_=Dߕ?+6ܽΜ-paɧ)1 jky T>|}$$ӁU "o?p>}h&E^ǬAc郖 kO< Zp2IHىZ BWP-W$dZUBe TQkBEF `X/]_TUi>{빏{7.E)? '/M{mo!d)'Mzz߇jS1z.*8 [=qow(\=kc#J*SDْyB9D 'g1P`7]πiE=ZY,pce`=ZjDP#aJY=F($r>$lH m5f۫~Rǻ )/" ;(/2E9ޞ1D";wrRPҠ%3Âi[k9ꁤqnmȣsg.NPU'-zkITR}t n`6ϚuWu UJ]_`sZ6Q yH/^d۞KhLTBz/6t2Aw63A9mYC kfGS(vhl@@a!F~XCY,~?][O$ɱ+s,#ʳ,Ñ5+4 00Ȧn}؁q"bW{>Qȸ0rmd/?x]2XJL:1d4ۋ(UB9d4}3\ˌJRl#C핗n#W([mW`vR DU%ٟ<`U}V_Pz[/ՇԒ5׫9 yI,7B9 9ٞ5yU3W܃|=3\oV_Y]\2~0h qPcƦ$ ǭ>D3(I=_Q+ӢQd/o)K i/?]~;,Rfv[2eP+Fn$uY88|<*Q8mB2>ˋIPT(\@L00$[jt?Z:YZ1Hv(ѷ5aNycGDTgXK#;T$Y#0׿Ot84b*6Lϔ 알; .D[ï8׷4.t|.0j:#8%9K.Ui/A&X즁\jYdZ`_Ju-VZSjAB)_% WAX z&EDoFl\{+!O;y4EݑgqO>5vFjy98s_! ]H'WyH4DMNCEr#񥰈CFˆ fhU384 #jPo| sS:] m%UlQ% z`N'xֱKqx̨Y˷ R 9ѩmh[Ψ IF7˵9;[TmE cu:*ǘݏ[CF;tJͫW~QM|OlEZ2֙&Xk}D֙_/Һ/>_r#Wu2Ȁp|l[rue~ #зxZO>KOdM>d:1y*ğ2+y0ʁKg$AKe5O;i-/ 5$V`^bPt?'ah/s`Μ>f/}c6EHޢJbR2@deEAal%^2;u괬]Uz s )%ἓ@1͒kZџk,7/@^4e)gb2epï7ǝVZ.ͻz I `ݰ\v'8Ԛ?1cv6vx4_t豌k)cBFLt9f"< Nk9 I EcۼԂD-9<>>ޢ(dhwœ35]cQk7Hk8W)C=JRQ3c .]TVSoRڽ`4/oV0[r#l+vŠkg{@p6֋GyrCⱐnUGi9ɿmưZz1,"z8:t,R:ߴ^n ˺8[>aG%$G\iOrzgV. \Wknc~615!=MUPtщI $`XqY[]!˓РUaDs؂z\Rw4O?[uk9߿*=ߛefWλ ]In0?`kAf'%|NOɅ$CtSVuH'$f}\P Vb&CJT2\ĝhu5`GC̱H:f=uu|#M{y6Z93^]~;J*¯ىsS̈́ Ԉ+ ?" 
w/H0A_A{LPO-QwyzӮ-hpnv˓N&33zd^͏̴o=2[9Aq([渗v5wcBg:}7.VqL } exZ%żitn%l|̺C&QkPdtVmWF$UgcEz3.=Cmw.xm6w+07jaakfpOw»4:i] 1̝]k!B]*[3HIrV-`0`W!\v\t>dMs+-%#p,;tm+@|)dəZ !wG퍣s@^Gζc]odN/U3[ SٗsrS-e{NKn:b}0/(y:i4e #;aYt͌D@'׫7h |}ʃ9KfQoިjnj/PvX^˗1-sғުʱj+ixCrMjs&5O77anwGA3.I"IGyKd%c %II?o5J*SbIYoNe*V^ě-y?e#nGՌܟ!aQsYΊ3 HeVf(]VƫsP`9~4fXV E-v.)T8BFc5|u<N%_,6J߽@6B Qz@:D0ήx%&H/R9H7䆋66Hh(JVϓb4DK 8ZI2EBxmvrtvMƴv˦Fce2-cZ'ۓ&[u1$b]F:xh{,ڃHXD .JKͭD3' u mʟ\aF̘ ?OE30@Jn.sTP&IƋVC47Z :B ,ݧKi )]gv S$L]+B8Nӳ^HFVfZxnd*֔c#owHዅ=MndѶ>>zr)J3vY?ط:oΕ((" Ij3͢΁x6qynu,M.AU"9jmF[﬘O Յ6N˾kPj򃷿5}hJ.!I ^IE[Ru 07WͰc;# <9jҐ-9?z-d"5vT+:@K@ZHzG%P!Dksƾv:^6% 81eMSx0ƽ`nG-irWv9Jb7f9OQ.Օқ]ptm/ Ko*Q K?]_(#ay"#Y-KR3](B9T+Q P6" 鉢DT!i#3;[F[>uV;nCe_g|KH#ғ5n#rG؈{T9>T{1e?h:Y7`9!~W)^wR7VFѝwbŅkzrzx˗?M.EDitJ>\zxP`R+w\r+>-hf:"%)1>X90Rf@a;,-n`ހ#)Hz 9!20{5lw/!&/'MTL<_K:LDF`lm8){(E#ͥf}jxb }bJwY+ZyR S[@Amzh.7tE_G[t29Rrȸ+!%:*Ŕ+t@{찓EKhE$>Vw6^,P,"PC*?#6UeV ?ҡ $"mJ^ $Y1Q:gk<߆AOV䕐Xcu;'4V/LJ!k֫HF1+MJH)+n:2=VZۯRLu)Cb'$?$Vw,^]NKEXn+-s2rY`$SUy+#ʢR,,)PKIHȄ2&ጂt2i^8Hf|$ $~IwQF`܂UeD/7rtHva:mtP$<4W $#3dL$ ܧ9dE"%\t=oq^zvΓ?04dwHb)Lj&BadhS’ UR_4'<+2Hy̒U9"|R% EHUikۉBzHDb(NXrᒵN8.K:Kez#ɍ_k`<臅m, x!H*K#cK%U%e3K-uxd0F_p@1T=P6N6Z'eB^ؿ]~@M%  &T֨KGe&YOrz{֓ܳ{7.eBoX~&6v9 2sY[A:ݪJU@E(c(x0BCo 'b MTBTV\U =w" {dPB*VCFc(T =S5Ju 8ZcEow*p=:9[rKo(ZA!~;VѡZ̎kRUflDKUrNW1-=Z'"s*_^O+f/rTt0ɉPzijiMɲ\P/'G(֥R&B BuMvE(@`D^Pq8Z!N.cU @VbIq-FDΤi-p*jWa@:*cڼg| p7m w~1$];Cv ղdv(kp悥Y \=ny BuFk#LȎc#͚ 9tx)7\QT?fi Ԓ;ū;V@\}EX.}!eS5l@/]Y%L)PzNCj<D$YHÇ( uܐP׺xJ{隽&">uؘy9n*஠̰BH ݁l M"4P@y"MUE|%)@pi}%Ȗ1n.xU4RGH;{y7g6sK&@G-G~>"2Q޷TN 'Z pbeWE26xl$ 8MhݳsziZnZ",=UFMYDO,`Q΀YMu,2~3`O"d#g/7W}u!Јv`U0fOF6M}[#u_xܔ t}X3,~ŷ /͖S1p҅ gS L6]ʄj6_4Yw'6廰!oYy(be r "| .dO y^+zKhta9~Mf1.o6ю3,`OO 7-UZ\TS'LXPV o' |vR;Y&YB93vGb rnkdss9yz$Z<ْNQE-*fJKz|`\ ql2j)Fo2֓{o.yMޜ<ͰLe5ٽY糨$*AS/i5Jkh<Geur*wzC|5ncTF%\)zF'JJdF2ƢJckU~H/t Ǥcf:iՄW~PpG#Z}qmXs_W9ۡ b^jM  b(d 3'7teG̻D s,sxRi@֜Ɠvf!(b$*+zG\ڱJIh%~^MjűKXe 3jFmI$I.*J6ꕮksIQAr'lmrG|4;?s,E~qc1mn֌`6Y.Z/$_:eꁱ#k91݄tB". R r7wml 1^p"+ޒ<:ʠdUzk:nB$6I|uHϝlA!l+^\`NѥDiS`{csBU&Uʨݢֿ+qXпǟV9 q)fS+9|AMu7~9+F:0nqU@ PU,vCͩ1!z",WJM^`5EE[ۆDن&$A 6ch%Iqmr/?ÄLGMSnqդw)|%(#K}bZd<[72QVf7HYOQhǫ,VhM/M f6^Y gȦRC^C4Gq򏟛^dK[ r+kF+.awEQh2]_آOH1erNh?C KfZ%ܠ߅_| e)61ľ)5%\ԛ7lg7,w˳d}p/5_Lf:7S\i2>$?> pN*JRc@(31iіU-zΩyK"ԤBhH'C`NQo+ϯS,28gowEѧ -{X@AQ]6U6b(~+|-|T\O2 y//>;"ھyFFje7KY{ǯ߿0lQt -m%,h__y3+>GYk(MYk1O_A摌_}?~>dW HZd hTQv_9wx| XxNj/j4Z2* kTp^0ɩ+uGΰKoiigWA9YTjk9oFUUs'[$F)ءiFVuZeJ#-k ؖUZQYQ+ :=1>EqԦkPMT7x[qeAKDCh]B%GdT%}pI`T^UDya2,w?\^jLpģ"ZP]JYOI g:F MhEC^$µkZXEJhkJ5*ηpuF&OtDh>ג8!W'* ]uY(ھ)*$D?kiBJ˦{KuL:Ie)͛^TdD꯾0٪\9krvI& )CFf`wl3CL'##<wTcs<|CFh|0nބ-[x|J|>N[q R̀k=m98CݐoBٞ޷/"mwurBI'gSۥoɓyҌcO ]"I.j}m)W'jTeQJj"x1}s[j/*0^Dc@9Ƹ雡h W{ hT-D=⃗h ; (̀ߙJYp`!;V=IW{*OȬPip!rfqUfÉ[.LCkd*ihlq/APAoԙͷeڱ`<Ѷ U"YmWN c<mI.$ v9;׃0kOձ8Nc?fuL/+'1C,0NE8U2:D UcҁK]] پn}NM*﷌ =w_Wc n7\. >X.崖ٍ ોPcoȍ, #eC!w%[Ruw5Y$ TW)ƻ,nr0%յknvgXEm.zo:p]§|VQ= ZQBb h !$kH 0~($@vHg/ߩT8[Y !!D+5A<6A?$FS`y0 ]3m pQ BaJiFluA|O@]7=^.I))1aAʖNXo$P,7~ƢɃv϶ƨ[uk&p[p CZ_, 2򖜘KkE_t㽄o2$te!ƣBD|S ԲIw )Pc#-w0},w Tv9( $*It }^!!Qt(Leû,;kAѐ3q~=$SڜWUdRM1@Wyü<Ze3J"L:Z˔Xr`cg?0P9/|&f}NXs6'T!CqɍsEdT`dU0@7H"3CYc6l~Ys$GCH-odn6n/"#ȌCLNE~-+7Ю;]kf@ѝ:{x+od/%tf;?XTm\e^aTΣ,CK..̶ҷd/1Y6XR, .홼Jjɖ]e0}~mȎsJ :,6qf5\)v+߇Yo"ѩ3 vS._k$Z9^ۿ{Y,dɚ֜5!b1IZl=r,֛GVԷcSl5p(xre>]lQ:2Kɲ]tRqw֩bk>LMJ]ka6⮧SPjjwIKO^WޮXTy)Qeb:&wIi5Si?_^{9yZURJTcjڠaV{L9~O1Šl3tjœ03} 8}yW_U:% DG5 'ycV;9j<4uMIp -Ae@NT)Y7 S-Sٷ)o[ڌf46ͭ'a=,"듕mkUj74"h LP3 )L_e]/! 
j%Xl9Z<<+0u e MH:J8]#LjMCS5%MN9ҋkZz).-)`%d|kj]fNON+53&>FK (93 5z,$Hbƺ Rj7G H)Sp9/ς4sGevF٘*vJo9$*Ꙕ΃M!@xުZ &Ymކ^*,l%)쩴c v4I3j ؒ1 y'mqHKmϗV۳C|dL?g#Y7PFnGW ˘I=glq$`wGs0E:Pr !0afE}1"-s ząbX}^F;1!⠥u +*^Cxx+1,P>mLKX>ˬ}XsO7f ;l;1iU/I)tZ.#ࡣ LxȔ_g*^oxx&DC۾EoPp%:ƓZq͋!.< Gyͥfӈṕл{D||> g3v+v>抝bs;[8tBP2Sz Vصl {.#;vE[rqKlB%(*֊<z]VKlӾSu^G'$A'yodf*nk2 2HBܮLK>9-N R_=j9%sJ>u wd)ԍHGY}"=0Ԑ'2=bv+y8uudHg͊t~ ֍ GLʚpC0OWy̌v.x\ltѷ !&XF-[VQGa<_YDmջ4*W#Ǽ \ Uc鬺+v1? ~ko@ӿQ~|uKkͻW7MBǫ3;"7o߼=w~ގ*FF ;:F {ն~[$wS*"ֱ>{;| ?<6\5[AdPXi\mGW%U*gl9/i~юcz>Ư_]׎\Tl"eU=Vҳ͹F Вw\$hkPE.-Nx~,{On:~IȞ?ߟpLtL|=)" !jea1!D"᛼<|jrplun.qAb0CPh ] s-XX<]4wyDݫČ\ȋZOȻkI!9hН2& uH>'NBlŵ'С2͹n8 _O?яOxvt)0}R"ؽc`b]ƿvvk=dv~ah&9 Heq-tSy̗-OK_ "]l-Aݛ!cLKe?3ܲ8# .$B/;bwJyHRSGw|=ӋSpp^E5wH q v'WG~nBYr$;O졍g0D$e|=:6 d18$hc{$wXu;8 7^8=@c鷴ǁI4ߥ?mIeRGߺ 1*}_sgkjNQLsZ1׬bT-\ɲe4zgSFi]18窘X75 d]T|.t(F*(J[d|vsڭCLfd j$eJLgezdc`^\B!dDWJDjcQsƚfH+T[XsFn_R1o鮐C!α+YXF$qCȨ>S 4JH0νTr`%KYޔT_4dVd|hkF3_3m{gYEbI_mVXƾꍷRL[6rr%#-sUKbmjUGU7 6]D.XɘC|\i[%g_>m S%9TM챚l!!{=SQ9Μ@M0՗hkR(bc%qkCkf@̒ ?z#@0U P(#QiY{&+Hv ‹I?کl?Û~x~m=0_9{p`K?l0=I(ZL>\TjӸbNMWP6i0Ž!K I2s- ח܋I?94 EQS&7s3Jo'i_j$',ַB=0=BXKqC "ֻ>:~b6bʧ#G tWTxn S{q?"")ŭI 5n!B{O%T٤SV:bV?6y;"Xeʉ;KuZv G, " ]n̋~9LwItG!$g)`W‚Ôv >a %k-ʞ]џ]h 9{:M;iCg[c^ӭ87B=!O295ӝ~,zQ9?Rq-H9̰z7xϙ7!X߈7x{%rWƑvV?߸ ~W9㮀; <5?gibݴ1+w¾%]d淒n^; xwOK#_ zdZɭ9ٻ`rRCSrESBlOCMg0bdW}(mdd',jy:\a0+9b:u!bTl\ }Yutnuv Tvat]ӵb-!9 _`P*t_F@6v1Qd 9ɷH@ES|vs1G*?wљ-[/:=6Mo3V&/wsG!Ydgm%HӇT\.PYUi >$Bɘ%\I ݉t'ܭ`ccȖBƵ4yoP^{RYmns}s#oI`@5nY бߖ|mS*9K(.wTI|K6&TMH/QaW-dCKY#[pr ~ Ik-et8HrѸ8w.wz^ѩUM֖mE*"G\ S6ElLmmDƷ1$.PcΥzm)jKu=#bXeY glnoN. 1ȞDyTps%~xwom m6n1S4Ѯ}Pa ?ʺqxՂ  ovLv)"w2ɗ7 etfoHÅBZh)wHb)[Le)&j^fT-Q>[r% $2kc>0f8,;Ǽ!z!gƭD`^t>F}[F"\to7&lGV̻ގ %CnX .Z))G˦fnKJ+%o:ݥR[mMB-CR%{T1$c>Wȸq2>g=F<ÇFt5 O';!<Fo*ċ8j\+z۲M gqњFҦFT]Th쎱M*.ْVnYrH`Q K؛‡t׸% EWxɴ.d|QkiJ_JbVm/};hIݼ'}u.оQGNɳ,X~99h˙M*-Igi޽<a2IfDssl^Wp5I餡 A2;> gפW}TQ%Vnj[E=Z)^EfّѾ$3<#nȀf"mZ੸1"uB ;'eK֯K ɦgn(ع`!UgZ$Ri1#,QX}2 S>!Vm'Z~nqRm+N[t8]DCaHvaT5C CD욜%gn/~/pc^[Xp$FGq#uOA-BK L:e>vQo~0)H)g-U\†3 l,dtZɒ¢QtKRRFu#alF?;( bHZY#x O5ў Ib4&n9м2 _~{+(z?&N:k^ݛmE`7db ޤQTDk%D[!Z4 Ie Rab5VKp ֋YV<b ]bFN'Ó{I>Gv 00Q}VƜt~̋-L6̕}S;cZܭePoBBĵsΤ⚝U*-J =fg7}ewObTtop耽r`][s[7+nƵ/Jafva2T e;{AQER<,K'vʒ4_haF>"t-> &@g{Qdb(ŷ,n,l͵@5Aycjs \o,(X!(L`zXɃ P[M59zc׃ }͜[$>֮͐?hԍI5+faT=Ym^FFՍΏZ9~BR˘8ՕсB^y:VWJ҃ =G`{%Ɂ>\Ä  V7\fx,ꋥ/ybq)L*_mRQʐL4PBh\L-Mʘ3sk5)W5 #mAѬͶmaD^X`_uxb4£X;|g4eZ8C$^H;D5jd4iZ2l%9gXy_][ijj9eNog-&Z]S^h6('.jHQ4ᓹEDŰIdRtiXN%1κt-؄Sr1yEk?.7$wP~00>o7/5!*V甏.ԌUqj 8%TCc. Yg-Hٓ)+EEa_TDYJs0Hʸu~e}7bx)P|/{0H^IrOOL8,&^v~ɨL#A|)NXXmٚ9N}B<ǽfg*$ΌU;/eF=<1U\!,ʹd]' ]uEq,UPlZwh"9mm $X+Qa"F&wUX ^GpJ} 1[e{qq?*7gKn- FY}SSf`a -P1:ʱIGUup2d ./yjR)5BNY⇻Ȱ{޹^eƳ߻^/ZVRKXyf-Ð(@T_pC8(\!0XMJNYXf|sSMNxYRWtrt7]|I'}%Wm _byn#i#גܾhaa&h[3̄t<ۈзZ16'g&3 2f Kjtl'a 6h&`%9ЧaNں-p4f.rJ߾M%aQ܊J:xemH-ZeTŖ\RH`4fϭXd>Ӹ5 tZC2%IG6%WuC;Y;ױ7caֱ8!;|_IrO :3đZ fp-Ӟ=cn9v N[kDtϘw;J0ispA/\v⺡'X̗G#4s)%`a A]B3_aDzdxzy]?jXx`ts`&8HY ][3 obgBR),W<Јa&3! 
}}]j'AfjwV6`N*Π ՎgjS[9Q;7&%998S=kU$#eYɓYè[5N̤k 3B \(R|)v8cSTٮݮxƯBabk"Vq&˨ v."Wq&.j>-ZXŊqeMLDR`REp/%Ԛ>'kKC\]$mUZh)Ы)A~Y*UI- 5/?Bвo ]h%g0TBbYfK/CgR2ߜEz,-5wsĬYN?>ŗ %m뛛%'xnw>JJpR,$%)AZ34l;ZY9G[m'U1X/SVVHXd+8¢lGvIBvw".Q9뼨@;ճ[q HBuI"F؀U9ڄטr52M@5ޡ0j(ۖJ 5iBE w(Uڊl@ADV)^On4!B։fa Ɛ%Y(vLK1؈1nT/JIN7D?ŪO WNUvk"S½^y I;,!)XD#2'U$8SM,\4 ڌ^Kl(sɑTQBLنXy=OwKs¨CKF~ULo֎tdYeAY/]jnEHU@9Kfmc%Y|;Nv N,Ĩ#Mt'_WȺuX{77C\M%!R׬xΖx~EYT EIzDF]~~7o OKhɔМ|IQb &0Q9.+)XKAzoD]`UxJQK^LmՒ^hzmk7nf?]^V;;yKٓ{w'/ʛxq߿ltފkrE~YnO4769:looj񗾿-Fy<}w d*^7W??.6j?"jf3V2"Ӵpkl 1h@ɼ^/קO?R}r@p rEcap(x۶;E L%&XFb51I?K׈*ZpԢV&50lH>|wu :㷼[MV.Mld~9aK1L{ʹ}:3R :$0sʴo„ ʿ˖0̆ͩ_k35!6ZFEC $앱K=L|2Pv WQ7ƌcTC|V+I7 jE\jq !΁!Ⱥ˂~\j9OzqTi,3m)fLћM$^s y2k<9G<mם-00B _KrO!{Mi-m~i(vxӤDk4d7T3'i>Xgĸg?,asGJ9 q}3 J7gOx́]L`i]y2Li7im[ޯkX P}kJcg)(rf%t jS{9N2qHDnT}2[W>v3C[R!ۢkciT57YJG~^]8<&1zE+j-g|lqxpQlq8\ I/'tDŽwEOs'(÷>| .a G?[A{`2Qg15,Hϴ_/Zh9'wvVRxejVU""WXҹ9pEӢWz"r-oboKGyk\9PԼD1ND[Ub&\D]Es3U'ɂ l4 VH~FO%67yL͂,yG 0ǯy[HXpF_$QAH5SӨ3V`'b$ƭ۟L#6g y) K*c0!e~{ݽwCչgCTЅ!z`%y?b9^r4K~oeGv}%|_lpx<#QpMfQ!ۗw $=H@zi@ⱟ$Q؄?y]Qſw"麊v k~U~C.eXgQK{Qs4==+$7<;k}4O3ZQ,.wJ֌h!HV4AXXprB1 {C2ZDۼ en9I[-)af4/pA8?W`nozlrE;b /;s-|MIX%Gv0U6jO_c|J3-.Q tOaawO7P»Gimm\rW|>~[\yԁ~o/ָS8wѰ7 EvZĻ}:fV]Nց;˻#Ƒ^H@UqU6đ~06"GX-9ZqXfCRq+ ?`&>>na&ŻH<g9<RF*MiHdk_;Cd.M8\֥#A.N%gTCUݿ8w R΁|\ZpQ>|x ;>j=|m̜Ӆʉbvsc][>s%p:VkTS1؝#մզrɇd4( _ͦXi)Se[:Sb92CW{DCr-ϥT7v}Ij`DiUI$\.QY PTz Wf\\mYO:%Ӝ8qik 1.(Λ5m1ok bԢSJP̙V%--Yee`>oȿL/w 3g`8\T]H+/hχ: i4`8ˉS2zk|7IemȷQ*a*D !A ?)K~Md|s}i%ayXjZ+RX *.(m؄H(#hAѰy2bkAOw?|L!iX͟o7r$5t;s`ƱV.J࿐s22Щ9D\`{E1M4B)5bM@ǯ>L^O9^rm?Jc'n4M QobҞ :4H5$c&y<ϔRf1X.e>9&'7z^FiK":ms<5}.{`>ZX2JĈބ2֜ƛ>ڲ@x$٠CKcx0i>Z+e2XMY+9Mk98A N;bUF˷ l5)Jx 8Uڤ*Lp= !oMgJR/RxےX$f O)QsjgH(MK0f 8٤s̮ݻ ILJ)--nRJJ"ݺS!V]B`\qxV %ORa߽`h2)u<.LEXxU T")52\((P 4QUe,^:N\&Ӟҟsq/ɞo왙]k3?|索-y.$?L]#LGw?Ke*/Gk<%ZςU,W]R]-9t6תiۄRy;%b9bNu8x鈧"҂. _c7nvEˏn^d!o3ϟx>,(dc_1mמpk,E$>P});D&_2[kũQC"ġ)SXJ+ %V2%sXYѥD%:ZȪ%FeU{U-hVjø)a3f5Ba4!2{w}XG|73Vt\K`fw1\Xw4<&=3:[Q>E>O}Y&/ 쭔Ǜ??oI\A©% SBa(ٟnM+|c(RĂS8E n>єHpX nrn](xȩEECuhp,>J9au)r@|d]9iX\S\P 8}NsڬZpZ !&bkIBnXu!se);B,f6o"sCFDD cn',aWTjM1="& gG'TĨ@4Nczg(2"-OI9Chhݚs?#Dq錰Jva݅C;i`.A;KuBXiBO-$B@;xi cE5' sf{+s?zG}{F& aqq8SJF,عP/[y¤r/ަjԋ_CPpCE[\JO56]G!XZejvJ~i@V.A;yϬP%:OS BͪQcn1O٘'hSu4yڏwV]= <:yP8r*Hƙ"T0|3 'ZyTߌ#˿BN&@fmXvA) I8}MJKfiu"GUmSwRH18 u2r.I*%G#gH)I!5HEC ڒ)2 P3%vu|iWC.}I׈h\'Ԕ#ٚviy`*tg30RXpW('[piH<6};%<_ x|-6caǼyZ/5Oihbx35f!3SwgՄU0ra^թ 1 e_kհZݺ G&`M82A܁/WTRkqܺ !|.ǭ7P>~yn'7h*5|x@MAhwPQNnqFSd1"V%xsᰱK BN[ǤG=׊[Q $AoRžUT۴Btol=42}6iB4:c6bԿd7>SK4X͸DlB4)]p!(0C6:e4 l DEňiò\~i&"=!u$; 8ю 2 P2H  72څS&QD6EpMM&{HZ# Σ: 0G#v_ݖq0hCBNͿq1p'ɬ2tTj=ԙB e,OSS /)F|P傴Lxx9 Hٷ6B)!y$P2.:fA XhNi(ёPRwt1mm; $WXFpR(=URAHcAak('\D!2ś#6 @Hb?JCmN`UILl]7mpJfC{ ?t5"?v@9ۦfs6>+k|_ 6wҤ6!e hi0 61lMP >Vyy`9 =hyri2>6 鈵JZ3F"l3N<(1I R RcV`Sg6dxDr[m@coReu:p_{q*+ Pol9gWwLk ]_uIM(5yj}t$ ϰeweߠRJ7d:s +.-tOSo&g'3UΐK \BSjMF&Y¬[4}Ss4y&aAwWTbD硭!,V'›8h7@D &0| vCkO bhEMhh3`,AUۦ|iB"l’*&1k MU28ۨ@hC(9;exf% NR )YgL ͹jam 1u~l]3;8*L?:;Z`LVgT<Ú=cP#Jp?Xb@ =C4^sI$|&a6,V1 xr$Xӵ2Oc"m,bO0(ՠOmZ3HP 8%(1`!H* Xfhko =q3T KxIf(9GyDzvGk>&%Y twKE'^8K7ItT7MJn:ț|4tar0Y7{ob?/7˷mM zdPW?YĬ>ŢN5&Bj7+^}SuOJvDY4}YǺ L)άc cۉOm<1AFG+Y|Zyjo}s@3G9f~EgSLawj%tiO_t:CM ܄ba-!/. ΀* "=X 9(W^QԐ O V)*$B-0\:yZ+Qk^d&nToA܁}Z,j;pY`yi5Vv)(VS<_jwpM3Ki:)3kQhJuMtOy] (a}K JM.N]pxQC4wi&(NY?#áE~R@zJs"m:= <m-ctfWdEj,#pnzogW[\Cѕ cf忳O65Pf:nH G&}uo$ifj0h%oOo츛~ RF1p?yvft߇4o 4315ZsQ44)I'Kg z&4)gn]wXqٞx\j*Gcއ+/d~cnGKw5B 8KBylj@GnJُ7fׇb"PdA#ǜyb`{ΉYwl&25|}ϷodaiEDRc_V'NT F\JZoJ0dͳY'3?L8_Nj°‡OAO/Ö9kw.8DzwgVojm. 
bflZxO8 l~}g:ܩ U@a<ԙ:Rg^3OyX}0`5jrvF5X4T4RQ M6^EUO诋"#Q,ۇM_G g0R(I `K4g׻_;kɩz՗KA#˕&w.]nSKAciS6g3B4RdU'D S D SbhI!`MZnD(F*xŒ.Jf04̂'9Zdb$=!9\J PG\sphFlrY+ٌKTږx0 rF;Rk G+;)li:kQFxt;=h 16'gt" 't.A?EN˗nJ8́;+s sEH$ǠuHfm<6U%1[lcPd\ATu}X8q"ZIF%U|N8!Dj&P' {:UƮG R_q@xgFX爈c H9Dڸ"5Lɶ@%d6k2@5ddX 4RvsD|/{| JȭG&;!p4(]CpaF"QM_Ԩ wge{ꝳoRyYճ3$%"Bp84Ra* Ti+ rW48󜦒<.!Jjk\{uqL#Pٚ%yP5߱)2jBYt~B(\=nZ#nXZp% 4i湢imB"&TDb6AFb">+5vON$fmlT2cQU{@,s.j"dT>%>1c;H A7@!@lHc(A~*P#|b{6QKU;..rRQe@n#D@ !!蘵MG(18'1~M}:W$/Jf?4F_PO+'!$*.,|Mhh! +FLT$IccR oUpH*,H!69b&гXvG+kfU5Vt*?6> CTjLaVqp 5(ҭ|ݳЮwt,_Su0aHU"N4|۴WCQVp*Oic+&-fr|lI'2]tp&3w uħok&jDM6 w$fQy->my?{WƑ_]A`g8uJ(Je) jRbuER蠺~YPi?3aSKR1]#g~snx>Žr7W&L]+j֯4mNϩng(Em{'69^(__B'hy=dS*L8xK4&60'!]ģϨ)^ ‰B\2ap.cpD|_Ufs5_]Hθ3$aC;\ G$c`'l>"Dt`‚e`9NfWfs\I$Nn `} KMu@h<ULpјĔ}V*"f;vW 3.X$tF ի`uF>8T&T`ne[,?@Rym!/3o*L0@B1"6!!:U<Đ@Na'D+aV6k~1c{wtD DŽ[oeI42,<',68g- ۄe*s nUOT!v*cg MxHs`rJ=˰X}ӳƩIK9W)#J(}9JYGWa?~`/Yq|Cv1ΥUiGx}^ϓ9l-|+xN*7"ҍ|(g4VRkHRaP8h qF _/aߛ78;?yDAB,Wу@AH9pމ~H7w Qn" ݈ O?)͎rUbj%ةZEN.%Vi⠋ CQd(rQp8hO Dj_=L,wiGUpApP: Wt7a6?nߌrRbInCb8O')~ GlQ6xp#*,$ A_7ߘ]}`ay{]e9 )΀p֘=zU/r[?0~:spOa(n%`ꔀR΁$C ;˰8ck"6'<; H82ܴgͧ"YiMj=H8{ud0z8wIz8Xzib{ori7w&p55lOa'uT>@ ѿ„bG;)͐Q&8AB&H8x({"ZwJ#@A+hg28l4cZVvmv5˞9z_i"Id K:H%MR>9O:fӏDzEϒTaE<BMh5j5 ulr luݪG6!lAnLNA8Eoר %@kzܑ2y[!!D7cb ¾w\bcIVag#)Q]t %WبjwÐpD#NAN 6[)hMK3%,ISwSRI>@1@t~(*g/>h,P:>zyQ:Ҏ =޸{W@OpTFⅎc%iUIT[P=4(KETX,׭aS;q:{<=.4s[o<\VGls,6WCnZUgyi<]H2Ibp|>9}+o%lEswV2fiš.>)„|"+4{&(&Fy6cD^8XT]gז=h)ɏy6XJg矦Xl" KoV w .~}s,b8;Î}.W܅t< $00(xr6sṴGf,s޶1~yڍQ^>`kFyh:Q8Ppޣ:GAŞG-Js퓠/ƭ]Pv@, u9>_"isWOm(oU-{`bc""HjvP.?|C8U;U0JUV;E0kD0mDxmt5nkP/',% :f ey8! /D Of`iLQ+6 ."kY|crNE[ƫgX] 4='K">'6R#,Z=BJXam F8u1}_M/<6]L{WE =G 7]|EczIaF䌒zr LV!͞oR"i^z*sX}z tSؤQ1&jTDU/Ӿ=Tg\QRIy o(zV)Jiw>}Q3o)B\D f'0`~ams={U;G iZGB[b"V%BB2^YC$O.NL<^f/ i,^q~*_?;5Յ[Eko ]];%6oRy}i|;pya{br3B_`6Fm%:D$-R֒"|hvӢi2 wZվ tqH_~1LgMT$T|AŜ-yparOGHA|%xwR x(t6:Lhϼ"Nr tԾtԾ|*UnwٱyuAyPa{Pgtd^X çr5 ^bIxGhFxi>g Uڗo|:Qmߧfa}wGwwwdjhb'qEdRh)!B=& H"9,ˎɎʗol%q%sR?'{^FN3Ie8"PqpK!L,ewcAE @)=' 0a. D QRNxJq Ijxs~j^ir,gug5le[RLgCN1MgK9!|gD4 [JpyJG&TrmM˄]K J`OGǴs:8aTF,Bq*1{B Kw FӞE_ cńؖ,ֆ-?sl<<|mΗ@[k@K %bzh%Aϯ/M76iVPu(\2QJ0tb!FP,Fȹ64x=,u،; IJ52N1gX2Y 8hjYssI 85aɂp!JйZLJ>BV ̙+u AΜq;vljGXṱ [&yd#Hc I,ebd& *Y@x;BQe$-{l.l$;]Bj0mÂvG%L-6Z>06Q`KE8f&0ΎC5 6PtnZ0 <27Ay;pu0CVDPQŝ9Z۔(ux2a8ip3fGl٤-H_wNRvk'&ԢYq|.i@#f{ /@-1 50J2 x!RNr$R@p 0tc,j&L;f>ﻸ6DM\\)2B0DϾHqBJL2?YaBHY%6PLHpfk`d<?(|*ڛ`9r(Q;X^gexOPی 0$@p( (XVF= 1:0|ംN p@YLjy®BHoS*Gۂh%g wdP sފ["ZqKDWi^ܵtn[P0g-$e)nC P-Xǜ("$rLAd:a0g~㸑"QE{Ip"YFI:q {wgFRk["kFF`KlVX*j?l.'!99L]?~@+5)Io¢C0'eErϩՖ;9ZI׍y%-?Ve =%{aUj/{B;.Jm4#V2PR˲cdfX9p u:Z)VsDqq MVxB[DT+K-iikdZg\s^z'C]R؍}X.@i&HO-01_Iih' $VoJv5~@6 &JkĨ` jA( Ѭ6:&Sxxgؗ d#ױ֘) !)T7ȼ5)>h<8|(<) R-!P4 '<JG9al>Hn>Ya:"|U`(K#~ wwwgӋE`J#o_1ZwTS6/Wo_~ QHV{?RJI>)嫋Oͭ.)*h Gf#LF .Y.J.o4dpQ>+k4mNcx7і^MXW'fS&-k\I=y΁"PC>?<-y]@-/p)1N͑/ya{0u䐙 ^i(Mb"{ɐmۉy?C;# ;?wz')? 5{2\`Y,ǔұ! 
40m-D`t+04I%,(r6`e3S#${5{BlSY(RxESaӳ|hz8hnz \Q4:Iat+",Z?bO"#ƕ p#|kDs[&oq@p·FuG&TX,zy!eQQ#a{R>ʳ=w۞1D7^LVҢUV,'ġXO?\^)Y1o })]q*J\KS]"~l6^wX^⸈8g:7T\y"`*T`n<8j|aW^hƜkc+tG+DqrAEʂu;3-%2ܾ{g1WDk2+o}z\M\*V9Mr%ki/9^mbDKV}i,m- !1|kwAjXtlU ⛕lnv&jY,bfy xHǵkVTZ~1εM#h%ZHM2 EPHy~ i"d:떹k@̿/w\Μ  tcY}6F tǭlx7tgQsc FE\>M,_rPHl@ଝDNV^%3:&:vQ4:Rj=+sVoԴZ@̪a.2zi}Xr4"k$e+P^676W#ZoC5k*ANA uT\->nв[f# @ F)ɊKSnPӨp9R=S( [UFjF 6Xgtx3&@$ ziLr>'za's!:=D4%o8 սcuMYC1mRila <4SPWKޞ}CJf źz% m'PcZz/{{[ut~ {Y]jLy~%ڴƘ~Ri;YPuGR;ZclPUo:'mkaȮMvZ̞͵ftJ:X-SPƝV= IP<ΚH(!_}}*ze6JTDYPH:쳇ԓ=82VC%KHeR%<чƃ>I)j()cI-mکPf-dɽsY/ @+L [tZGFiVDO m(([B'YyjyʬU'nrB$HpZ_ (p녓KCUtjN(}}oxbXza[fN!b5m8%/ΛD-] wVQCĆrLr)HH0k| uLY6q#X&j"8gj0J6')T?k6'c|p6; IeLNky!xcD jIި6A &*oWfmȊs7(>|S{3<~ br5^_kUݫ<3f9By6"fBΏ2n^xHY-)ힾ/Xr|RnsjVA߱8HNɗ[A$t֙-&Ǥ`jو4&F,aHF'rAfJXuf26-+_? :$61Aq_o2VG 5c`X)Tp~uI`ز.ycY=>Ųf61?G֒٥r^o1v[|uZUuV^USlq~ȗ{ۧW_iSJf-Ѯi,):Y{cݛJVxH@uSI2@RC! 2aۘ =o&8F6i xtS\nც vBo,_t:?18P,@ҭ24.$Hl )YmPԺ 籬9kE|0^{r$4MH)AZBje^z;e(<5bi^ȄY`I'؏0RC"!J1KFN<+MfC/S&K]qnUiBۍο9U.{WHԝҌa_?]rD^|6l[2F'l[qضUͳ"6\B%khgf͊5;^bZN8^i_<^}1C?)f)z8W)bL@B3K]MUR7z`|@+@,Yd6b3"b3l8}B6s:Jϝ PhH7*d e!@HĶB&B2"@pe.ک $c 7?Y|.(|# C:"m圪ڠ``Tɴ>RL4fm%!4Z3Yt]16d5r#b5 R6DhraaBIѹUIml$M̄޻YA~ѳG[wp۪:NAzv8_ l0gɃ/z#oF΅{n~7!oR &u7_=v!ʂb=isF;`7k+g>L1Erx8uk"K IH ~ѯ˹g5C ̻T!ϜECxJKxݰƨ -eX'h]Uխ݂-ݪg΢!< Mqni7RlsA2ciN-вڭ y,z2OV\V.P['v#% Ȉj.haoav6:p&[引JI Dt11-r+S%\|7e٭GƪiOe=Xfnh1Jf8vz}:1iۏlRs+e])0 f~#wXU,@"6CђK)rʞSTt-i׫e w~8^3Z4ٍF>rώ䵒 WzـlLkŔ=ݿgcgrͯÝtKӘt^},N7ûloA@Y9N*U>e ++'I}߫O H!IS%":% @66kt(L1rJb݂ ã@HP\tмW2+Kq&Jeʦr(EV *'KYBTD(i 8 b)uᶀ|ջ$I;fa?gcA[ 5*x܆cH!ЁһP@fu"z_t|*ُr J 1T~JKf*ת0Ȑtqh 2,$DlMϨA7K*PKP ^,~4\bVDɀt\N +TWEeBo04͋):/U8 %ӍHAp DQZdB*H&[(AUsmJkjy W%/e̬I !&1ĉBT,|nzgee29SSD`ӄ'`ad %\OgSC41Li B֛3a~21BD' ?M` $$ % N*&\*t,u$\!9w*:wY4Ų)MJƆlc_|G"2MHN (pK@"7#=Nxerc!D'T%9IZl#ʞ8#j'`/) @$B m8@I,H z%TjSHUbkb IAqGj 桻 Q 8IM tƑςlO6e6u8ɹh% AJZKM5(وlB#_͕4WL^߫E!P kM Qf8U)$XELiTuFRV$I<}Y!D| \±G+"[IQJjO [%Dzց*OAΦ2ߖęl 羜owNPZ*!cwLh'vLq?g]XB'Oz,5{A@owbDŠYfl 0?؇t>ueyB҂StP +0 \`>&:Uh[C et`YӘr߇bi!nAMub(FtT Av.3Ex[Xջ ߿ݍGC]Ģ4~fn"њdJAP] L%yUi_׽_F;3`=Wܤ=`2]|گn<>~Y~f[9|0"ib;?S Yt- ^ Ӈm뇏].6hP7k fxadA1۟(tyr09[?{5JK~ao["Œ3v > L-4ïVەrX)eBM։ $HP˰0J#=FbQhBFr6b~SZn]l4NJcGrF9 p1'w{I$XNcN&>?1SW÷+vlس yo>ᝋny^9]ܽd}/635{ NCZyj7[ܵYh|GWۂ"PkcL}(V,E@Z08 kibcWßJB),F1EIӛpOöur$%xZ@X8Q0&9;27v؋^Wi'{/S|:lyne*9 ~>E@h!n SpqqʲJHV%3ţ"$Dˑ&s1"1S)'%q"S9B7&b+y4^?\!,ܧfy0]N7ꫂ *޸qH*06-W[HR<ԦKiD]$<03*'Bs΅eGFK4-av|#CW ÷nŎBfEBʭ3r/w/838wFm7;Y`uj,8z\Qt<hAft]Xmna&BO‡a#'32&^CphCVh>H>\]Ɓ7<އH;Fv!Ӟx2&M*QS˽ wٙO f\}ы9Я9kҊ _y,S3:XMQ2cfOBiN-вڭ y,S0ؖvJBqPuBqv\p 4UMZVU!ϜEx'l In8(:8F]. 4HMZVU!ϜEx'v׍Ȼ 넾vd=дn hYV{bZ*) m[L=SaJ2bY1%JJwb3H~ 6^~Ѻx̟Hnt3^u%+p~|pі!LwdMO&c3bsMv$67dsOP ف֯h{$Ēݽ | ПEkb<G D#D1IyA1aā]m?A1gґ-67"ЇR+9IvÖqry0Ω|'I6{C)O}ߕOFqw?uK>ΏCg8Otзng'f` }QHTz"3)٭L60sڎ9mǜcN4m.yz;dcR,7ipoR9D) ("xl}H  Jb@e|ˀGnL8C5wGq>Yľa GOpQHjpXPmگ_yG f)[Ta$Lu9ьm62¡TBPh,O$'F",CvD($i4v(B_t(4&;JT2]k6+F~43cIf1_c{~II%[%S4NdFփ,VU0TAbWpY}/VG'E@$ 4X-#Ʊu $4DI05/CfP!YY$X 1U*m3 `jUDjp@C!BE4֣6Baɯ| $X %e1CHIa"HA 07>!'/K ]- fF]*2Sot8d2W!6{lPPCCozc:R}Bu{?=7%t>T?V+{wn\wj:m=CzΩR ʥ-!CwKqCI,qF L9Vͦϑ.;En@9I(# /z(JI1j1o*ߜ7x4YgVqQ3n vg(y4)L@$ @G!q4jΈ0o<\~ђ{=* 'L=A |q MpHpHaA̕F;rF Iܻ>=v NPs_~Ao2.̶h[1 Ӭ"~OLm![i^6żf}eZ׶4`!딡^D=,$1_g*$0a@y3*{/T> bP2Dҗ %=pxtHMӸ-d~?oaF4 t[KܬBHm6#b_iʖ閖gP8%^0D|Vs9g jC'GVlueW.!n>ܝذHϹ CB/Ж%gD~5 U`gb38ͦ3 (XMr&p7Ԗ.G$0 3T/MPJDٜSr`)O&,;`C%\BOz"+E-ݗ{#Tq'4M8%>^H"ՄAa"4 >p6~qf⸻؇I8fapaH! 9b?M}"DG @7D^OS.Ƅ񸵺_e^^g6J_+. ~jS PݾW)H '<. 
i)s@ s8Z+i)JzM2 $Mx/I<\xOne7 E׍KC}=GRP8‘o˹tV&.{~9%She˟5`ԇ]z5!R$$TQz>O΄s'PiL@x2}O VێOyji.?.˷ϳI)qT +?5Ef\Lgk~w7O#odoOԝ&l ea%>Jds\UvǎѴ~;աɤ-<ՇZhNLKdުM=wfĶin=l*gcgm9nCRK٦ Qbviu3Nѹ핶E|Mfl>LR]^̘O+7P^>p_2@ Ap.2@8Nֹࠏb/^ {[l0w-s7 r4+DZIAL>0=#fZpA* QÇ>F@,7iX{%u2yu>*k:[H՗(] QvU!jrxXs' V7'ӬVΐYGu|jw@"sYV{ {TV ˖&NV;XQ{u*j1h9 QHfa4%qFPM@!qRfZx6Ci۠7!UE%0%W"Є2DR0z&~8j7_E7!k:/o5fVk5̥X)&L)-AGh;<9Ӫ>EˀZ?УL-P=?MN-N٠ٍ ZGx^Jg$*N 4ޥ}b˶}(&Z-W,.cF KEZTkGe>D/SBR})8"i7Z%^ѶxJ iɂ~7=G1^+?T*m񗝇ujOPƂK|ȶ_&$_Be RX ?|T`@  ! y,#u)-6XkC X+x4KfrCuכRIKedcl1E(vؾ(}7y/d0y\bB51!cmXkCe[e"$I}*+yJ[H(Ԯ!\RJYl c@cb>N$ԚW&R;{x,i\&%;[Lj @’eao=2nwȇbʼn4Y DޯԱr1/J Lpu܅uָL5!և u$,A8jj*%AQ?o2lF]$[֎XP>|I@B-< f,S&=[e5j ҷt!VI74.5"aye.CH JK< /T̷+^"t0lU,͗ߎNrc>r؛! e:kMc@̾>?oG$:t_:̚@| S(A ( f㞶ۭ+Ocen+|0=fxH0.0P"=%FMht/F¾vdF ɟauC$-q/C9i ׏)tM@IB:f4S.Qf'^NvF(q1ONj/M`)9 dGfZ~}y1Ô$p'cn ;Lw[DSaAR"o"b#$hZ/_=4j?#8D1m!evAX)Îϖq:ELL pa$5oP}eJ)C8p6 !bj} 0!*mFoաNNK{U?9+#d \5 /n.mwj#[(%aOc_♏8NDss gc>.\O<2(E "UhzNaa1Z_U흅hZ)} )܁VyjXkRVm1CqfU 3Lty$_Ok| uʨldPb4]PNGl '`'YR gϞT;p˲֤ G/Qg/_N>|v9~[נaW9O욹 Q63$|NZ'xkR,GUҢȘAKupJ$ǟw3X͟jjB'eu%=:oΩO#}N\o AJƣ0"A/4!F #勶A?~rtA ԢEa=(T {1kL3Hi=(Z&E=>ɤ eąJ)Ѡ~O1/#&dQ8q}IT(KHߌg+1e*D4D-  ޼C+}G uRpI3tR9bb"xB9R))^ 4^4-+2NmBMtQiվ`c.\Hu«,m\'f )55gqGtPՈf0T LOfM~HS A@Y'2К Rz$G!"ADO8:}rRB&]b)CDG#$ ²N5iC "jj 1%6:R$CLij=->W—륯y AjLF=i/#~Яϛ#ۆ%ǒmox޽pSc䍲ojtoB1;Jo$j:?M&[oԯKϤ?u]L7 :8yymk;ɓ.ɋ]a S=wlf"Woy|H"Obz%PGO}JCew1u۝N)dtY{z䛯re\c 1ce/1WW?fagGѶX2Gh;W^CqZ툶#FJ ĵsqDr},IH:v'7K,N}DrdWWVZjSN;ԪzSK 1N&W&X_y$n;Egf'_)A#\h^)<&^!4<'yG}jtZ -';Fa]Rw-cWC"9 j5@蜻wԵrj;pHYf"[3eNNj\ڦʝ&ytj=U)x kjN k eV!,dRqϲi&5HoI-#qt+%LҖ J˨@HIUE7 rYh)À9굷F /jhJM m6s۬ 9x1k3 r\w[ Ƞ}ή,YНrݣn3?mr.Vk!SH3lf WW]?9^Mǣ\V~!c( a޵m$"eOQ/CIN ;३%͌'栗ԅHTb{4EW]R;\/>(ФR.fWl$ñ\ ٦K)&]W$@Cl@幗[hZfav>,%sA"Rvxinb _m޷1/|x膏[W ]MƫF*Xm3+b7HsIQJ?>x)WM_nZ5+VGGdun;J`BudAl͟ %_! vQ̟ny' R 4M GlqyԧWu؍ zF|=XMsr`w|"C/uXDP8*%e`W65$&Jnd|K*h! ւ=ӏgT0 %RiRVqqUD)JnjӺLw/l{_~-gS,IxypwztEU&-qA@N{>} +WGIh{CӃ׌fQ͆ExXFA( 6FXI_ jb0Ic?"2b{O|o.Fv0 X$=R]H " rуm\!6ޫ\6+1r"T۱Beb>?8|*FG̦YYOtH)]uuL"H{[}ag=vd$;?] 1jCIWWAdzSOqMPl4$hFQP-U˧I$[!l$a&f| 3\]ś<ڝp<@@P5 Ke9] Ɯu 8$X3}l\] #NV=s1T 8!AA 8rvkAN3;8sqNcDP9D~r]!{Nwî[';p$)) d??jĤBIf KYl[nͭ=~izc)liXVx6Lk僭%/6_U2agô,lFyS'{?xjw+WKCS8#H쬅4~= :kc޵Y؁{ao'O(~o 4E~{i[EtV 3 ]WnN82O'̌ճ1ӞF0`a?6dMXc!0qM Z?ʥߦ]St=Y vY~uzn(6jP3̫$g Eܬ_/-hHT+Ƃ6އWJLRra(ҙ܄)v"dM 883ʛޛ\~ze_ ?x?~;8$qhJ3lxpD=v͜ᅖի ro t{*d+>ʕ/ .adV6Lӭ#4[{S7xN⇺~KPh MHeВ{"xEVW9??8AHeWTؙZ?xTK45$ń;Kd tE}Eoj)j[X}m@Ze!&2vZ4/Cz_o~zw:(Hvh ҅:#!p/qC-LfOfMCpuf ٻSL:ܷ/e&׺^Zg~k@mxb;wٷMz2u6/>%yh?]f'28Yh2gXu>]V%\ߑf`K mkS%>m"R۞ĄUz\ #oO>oӎ?Ξ8'v\ ffW>EϞSPVs4V?MZaY.eۥ_3R@tKgjZ(Xx3oRc\hIQý+ Gߋ{-(V-'xBgq6 ,gݝ?i ,mWAD@D>$eRHkUvW>|\ #D |j"Cl0󱎑 }(a&q2Sڣ'HOvCF`E'G1|_t|A%E~4PHDF(CG:#Dqq,JR <~e;@ LL67&77w'{_awGDxxk[zDoݸXcһb/gf/W\{8EƶfBq$vMI\`a{I_?sCǂg*ro:ъEhQ,KN.*=ðC\7,}RL0G8APIM5˜iU sr!9ݑ)1Ȗ؆9.*y_}Teu3er'gA5xD2u(0HĂqcurI#@jklF>4 A0Qd'B-JTFD}J; RBkG8j`l~`rIg$dEsק-A.0ʋC>?pBLP  xbwq=DA~EӃ2#o(t*$5̢HCHa8(27Dq0N `ͥC/<)EE2ЙėVUs,A9XcOx31 X{Jf(`aoh1S$|h C2E T+f1nkҙ L~ôŖH7tl_YE`g;ч2Cxk-6Y?N9ѼpѪ\1'fv++z;liMx2獳ɹu2ȟ_Ӭ2t XtEX2u@1[dF(ڎM k= dǤ9@`e7PP݁p>(ɰfmvo&2[B1G3/ .H5NJKaHX@x6@Xʲe%'קy>s7.sƯVQn2雥LT#& Цq,:Ebf2LT%jKa^`вbz3v#Em.ɐUە]&9oOK2I&BϽfɡhQdz'5y5Sc~MZ%P`]L]LDLJrj~MbQRe]ֱ&k$ୗ3Kօ EG *\VX͂-5 Fo UxS- M7yٿu0c,ܒaҽnRiW_[,q,F˫m@UZ156Ϣ{,&cThz Qkw>[$uEډ@;IF. 
Е be@JpYz?5^rMEj;bw&e"EuZI:;_H^7+ɒ*)e5Jk@}M SاghvT%~Ϛ!js/UqOw|%X#jG$T꒵KpR+T[~R򲮶fbyEOeV}Yz5I@yֲcZN݁w*Xܯq*p.mAf+ 9'Ǜӯ6@sFZo2vn>h)r7uέۏ,,jgbXw{ܗe%&rKW|w\,1vn0Gז&TKu]r1&`ewq.smW<]ZΌꛄ5R|Ze5(둀p7*jTjr+͓S56g_.DTD+ D-r}* ,Bż=Lg9]@ӯCDX?/| ͎ KE[Fa~F*VrМ-`9e|[1~mghMQ+b2G`'dd=HF#=|< Z3 FU$F)D*&"ET &kE8!@_42|.kȂ6谊GmbP ba$ۇRTf32Nj7~s!Y>ݦE~ LLhy mw;:7|py{9< qt^4}nM)h{XCxI,K»Y+>x2U H$Xεi 1 (a2!a4tD"gPIr7)2^a_CI,"{9ˌ!XTͅ8Nh1W.@搲GIj.c8S*DJ8̅"|T~-_c].!#YȨ8)) +H{4CF* Ĝ2ĬנnD=)k*Hq8-fI4rƟ ٕ^ {<-o"<&?}z%Whq_뫧 o@Ocp, ADeRο#3ӛ!xVa2rh_o&bm2ip}]8BR~=;Q"a!0_u~wF=Ӕrn8T#J[C`}=kq0>kD>ܾZz24Sg0v{+t-ϫBO;x`nnƋǀ':wʃH4#+έ`$xس}OEOW9j|7Xɝ*?4qRO٤q:_xlO'k" OŋדI` Ϗ[89B#' l40xuO'v5p6ͬ߈g#6N=~V>>Cjo!h4kJ }?N h-$C7D}X(. EsљO=rd>`=,/ߙ%8Xy~< %:5dyk9'q8,`\翀BN5)40ryS)j^L.F6j:͊@SJYI< Ft)MH9"Dzj='/,&˜M,(ÇLg4悇\/^!_(1 [8#H_eŠv枃:-^Z\Jd!gBRp&0 229IpT<AF84_r vSdmx2;M2fϯkQSM86Ic Lf͌w}%H3 q¿dhcG,0ڴ䚅!qʹ]ųB!-Lmhoib.k@koq魭;4 GZ4Ma@b0$T4 c,a)K@$!F"KĄ $: YW*R}g`<>0S'TNo?3QK<,bGUJ|-?;naB &C +{lJRdk+m4sc&%>^p+rBpO$TZ˽2qBp'@Mݏ0~Pҕ~L5r`x0#{?d|`CSZ \Z:&J?] !%οcd#k#k#k٣ei* LXD+% QdLC-"93! #l0T;PZ4m|:3%j貭PW[  +u0S`jeM~lJ :}:)bR$D$&9:7֐ROM~(?9ҶR `ް&`![9JbxSd(Hk )-(ImWUAyO?V+͸n$ڗª*RLY *-T}Z =OyHgUFnݮǦkg>ŗ"f*_)f < r>!dFֵI";^7i,Y졜ɐz{oҔ :YS_K̒bq+ Xޛw?b|sEet52j`ʤXSgVֲsnI! u!z% s]#vyk\TO0U"EqRs]V9Ʉ~ZIY{>:wvF>Bch= PMw( aiFW;:Je$U'4U,HHTGIj3:I(N)E#/l}bMZY9>L8ckm:$r*[ё}Qͭ@,oŲ;+,KP.rc!_ֲ)uV{%w[SDe涯"h_dwro0.1t՞7ߚkЎ8[ՁPT*VlY?8[D)&V%!p* "aR*hC%["եԶvdxf,݃NiTLt@ɭm'v.bZȎrA:a'y(I[Dm'm.~Tt [ ]*DY8K\Fd,qK44!AwLz#44M9nނ*f̽!,BցyC!;6}4n&Yl$1Y-CBa½O$- ePf|}FFs\/mJHlС_#[Ma:&_)K;ߧ/֊ US os: ksڼ74`:f7,mi npy{9< qt^4}nXN29/ڮ!dYJCuʿk#5R,z!Bz/! ˦+K`l}nlpſ_ٌ 4KXql??$TE6Ud#[E6UdrYx>S@' A$KSx$"9ֱ;KU(H)_42L.kX3ŠHfm$ڑ/Elڍ0ojx'yhM:7`VSv+K.6_dy5?{zW\񖂖Np[p$mXg|+aB &C 8ECAT؅xr\QnCH/8ïXX ώkC'=o<=EQu7asq{Օ ]9n~y @HIﲻ[GrkOzgղ% +ZoW$8 -ЄF=[US?tSVL19B݉{/f' ] Ab{S|X: p* X2gqkSgq4[RRk1bH} 8ޮcPrApj׋Q";9eul⠃p@ u=lubcXVPu7;sx;1p.f&5Iimdz^_ ̑ܐ|Hay€y7G?=}1SD J;F49hWi2M>I=x1Fz-#*~x4fG4"FX_wdH0&O߷'cG/ Lc3% I ".X D b4HOk=E)H3]iy:^Vy,CG׋QTJ" g'w G%X(Z(,xbO߫! \ŕ~)* eW"xu `qɨjU4Fq atɲQ{L֙ېpJ2ndW+Kw̛+OG0##@^ůr]mό C.lHƑ0K(gJx}5^{mmr7}≯ n}L)&$n_80:*CRUύ *w=q.v}ftzB4?Jڬ鿗poX^ꪶN'™CcՀQ._5d<屲yJ8eN#* Ly偐99ڜڐ%])9N؛#7\cxMpN)w5lPVB+۫IP<;we _uT@0g;bTM,?;t+Coo6Ir8ɿ&IJN,^0E(4@=|Kh^ `Gf9i.cr]]ݏˇ5EjlLq{n_OiD ~YXKJTڐ%zPR;˷!9.}zsY' D-2uYbؕrA"[Q!"Z[.5@t \hROudz⴮UVOಬlҗryOHa:d5<r=s:mn(bCV>HjtZ R#Ү-% HռTӍTΉyx W(/8M5lɽi\9fo"ϸ$AFŜ.$aمt4۩ZoEhG ^6 0IVp筠*zZj%w5TA72+%iH(+4.!b0B: m<b[z'V%Kc l6qnp`a"'XWuX:sd|vSo{z쒇Q>gG6aBsݳWfC]fkR0`u"Qo{;d,ѨPNnge2.{٠gTVo'.kR-B>n^\Ut DBT%(?q٘ 7|?RRt^!b(zM:떁;HtRY;/\2ǐQΑRsw"jűz }Q<$0`D[f+ǕpѺh1T׈QOAp|K4H 4b3z[ՃN>a-"C9]IKiՇnbwQm0BׇNׇD-~Za:Sa*)B:*EQCVEwjEQUw`NwEWYT5xb.} 3>@H@&N#Ҙ' &[/I!DyqeѬ8^AV,S 9QDeaHPQTZ_>jP&0FV"qgY('@ qBIw(~D+YmBҖH$gR}uR-uvMBAck+B90Rqr!/aiVSʠ4ł9\ւxjQ*D@ʇ*Κ;OϛMfYX5^Ü:p#lKSk4r*Vr8R2k_m `8kw>}Ք U"}û"{: #Z۪qӻ!k p$ jtyՀhp P^4rY($q&7RJ=&f>e2yFtMu9]Xا; $sBm{%f ń l8k"'yJҤRGR)p+AҬ 9Mџףwt|jZNt=0>GY O`(QgE>h1&<^|}wG7ds:Ww{[sI>}X^tz/3>1<9OLhA-yjU«(S.Zx% Qīj p׋Q"ӗ"4X,嫀ŸTS"suby< /D ;FV\dz|+{&*>31c3+,tZ[F"@lMo]J$P>H'h,5hd]N̗EfLT<??g8ʲuS>Ǜj2W2=kj:3Qҿ|K*:/wV &s3[{2ez97O9ACT<97h{F!=VVO3IeɃ\S3yTl1N'k<=R 6{_CܮUNsuaӌq0O*B9QL" KQ-a48B~ˋp7/f_7'WW9)M r 7P TcLHS`$%N0/OE0\,1.1f%6yNNj/aI) k6p] -K JL$O]bBj}bs!LВalmNA\;D ֙Ԁ^T6 STS*Bv:™RnX aK:o+cpiH@:D'>])Bo };%Ep9 EI^;!yاEcX2+'#=58Ӱ<2 ܟ5S2!1BE{hBYF5Y5F3);igA}8 %EMОU[[vS|صB6!x^<^ lܤ9.7_a HW8Osxk_?Ƌ7>ܠ:(Ф.Vsbx,{6fO ۿRZ+: (auYj-JS͝1+,qu6B;wUl%(ig#nq~q~mT|Ij*rej$eHC/ĿmQQml`#xZ6=R? 
6RH mQQmlFYSuبbO iRV%ځ-.B1.:\'!줶L:N*Xopg?W4FdFvE;j~Jhoj9cb #!bYGyRe9rkuWWKmFZGAQZmn]j "XNgOѝS2jTI):W?5RKdH5S{8)U7t\(Z~ (.cМ-o83sehw+@?ͻ8O<~DO%s :yv634KމyJYePŁXGu Xn։b;jH񆉝:$D&;],0+տV:W\J\sevgK% c>-LQcȯfU~&hi #yR?(+JףÅbp Bz-YkD1*%cϞuD5b%wxTM } Z.jo F)>nvG>-GwN (JE=MΏ胢)S{WyhS֚ίVKODC:d7d0$Ūd*LM;Aؿ]!.hxnܗr7xa:nC@dH٘@C٘ ^Z Q{CZMigגgFqpCf- 3`xGqvrR8@֔ڗFv!~Tq;/k!J6~}i Xa5NC~ZEmSsZEsM衽DkTxi L5#}귉N]Y,۸*jo0jF}<㬨5xԢTnN Dߴ-Gw/6),x4[鮇h.Gw//J"1yݾ*G fn֔8@BnHo yKV#pԓ*r$rϻEba5bwMBQ3 :{iAmSc0*nF}1!>RVpb PL4'H@4Ji`qҁk~O "jJ 諄OUhWVr:5V-_{aSnM-.%R1T8M]*RK EyC$5R:Ev8h@e04T N !B`T38ŒB޵qdBe͐ŀ3vyXgJL(K6x{IIME C7vwν ʣWqy$]-܁&svv2*"D XR8"ڒ@a\VsQ#}#LirHa̮؉=aMR xE k ^"h*x!3DVD̂#F ݘK޾]`Z>5Nc&,oƗm ]tTGv+'ba N?S30EW<iI̻[7ӴJ]g晩Ĕ&9$!{xI{ׇ{@!G!LZ@{"78JbFgwSVB܋\#ōH:LA!tAmeZljeiyϷ-n~͘ zf0YozƮUnXȘD8#.hb‡z]BV{I0!Bo@CQשR&9r  Q23̚I Pq |RA c g-7d[孁GDpts^3! 8LҲ0a.PF*[a*pOmYўcT[ANV0r8*## 1 Aш͈%@CJ$g ˔֞_8 u,% ى3!y!}-7Y] ժb6o#*/Os$=gǹDxށ#Rعe70=QʗL>W--O=`JKL3{.4^|@]nhQ|>/@PJ>W1\9v,.լr_n^J>KG|TϿPd(6"=KB#\z%5b^elC~ǐEϿu%wYSI>׷+֕Vͧ[K8#9Jܾ ¨^K"Hz1 jZr$@EcF$֘ЌJ Hd#Fbu3;Ш|2.6zð87 hh$z&pdsCb6xRg$#qe{|yo"FaDGpkVP%AH2挴y@ J#B5#F&RT mR fJl |䦚P5Q]Y^%h AjIK P [PE熁!%v,8k|7ϩx W"d&7<^ b" b%D#IWO(rYq6 CZXa@i5-ہ/n|I|_J\>.Yr)9}}sy{>˴|I(?ؿ 01ˆ"j!%@O_Mł#gAxtVh[7S)FPig>8 Dve0ADg!<*q DVǺ|rP &.keN@nGx' /dW)fh7=,ob"\tՎ-SS69Xpϱ_UgFf ɓ֘?WչPʐ  2cjrrt;$Qȇp‚ -{f\ ;ʩak:ˏcZ깜ZXlXuS~Æ!^}Y,@^<_i7qΐm3 σAy_g< :ӲV+2 rθkD{1Mg Սúw[ p  Dž뀼n3"zL_K~O {=w\)a?/odؘW cC< hHvo/޾:Sl?鵣-0CΘZA*kk `&oF :ʻY&?3l=[\`]ﺭylF~0l@i/f0+s8=7޻؍l\rtU>ðPaYF\vtƭhyydsTcfJ̔N'PG5Qf[v곑yk.=: 1Fށ0#څ\ٹ~PM9h.s6 Ƴ~,f):Ni:'}q0#_.MDž 4ϙp0 I @?{OFrz M}@vpcd^&kDL0~O5)Y+i [u:{?)ds 6 ͥHNѢdc5&)r4o9ݗt o}\Z;Z$֎)LSe!8& iiA5(B)0cYcH L,H,sjҘ"`U: ȕڢk'q_WO~붫Z#*Ċ&&Hv8'͊1Z8sĩ l>9{Ru5Ro=m -&BC5 hO "'"0o(q0VMALVzswJmiwڼV:մIH-x7;o3KEjl٣&-uUNĪPE-sbA6{ʜ_38z]0}r.twbqڵ}&.zHw4oPp*R̈́:Z1~uKAQKFٝI:-{ " eO3@*鴱wjm4q@}$ybuTܿ T(PG0 3<`,ϹcevT@$;6; S7p\H(E"hBx!Gq/Y֚ePAE txoP_~: _oV|UC~%$ABfB1it3Μqj.,H>YJS:"meq$sObDCb|MV4fECQamRR3t?DRW|Ld)_{-q8GRt?Q Jxfב~/"W"7FgEA-1 XcVsLW_4)FN@y $9Ax%4AԒ 1Ȯ (:`DZ6 \\q[%C#*Ͷz(k}ҸRzG`NV1^qiV3[ۉ})~L%eulf{ZMՔ@)<`/_0287v슧|䅧-oROLFyn7T'N%N3 SDK8AZ ;oH񔆈4W`Y-"6jH1uzot:fo'FmC vN ɮJ3y։%I@+0$TSN{m ,1F*4ZhI泵M1g5"9qv6‘l g{e"QB=~VPqQǑsbbC$9 JMFHS]̘J I= 'Zayr[P-J.K"1#^f5-tfT~fn^6G^Aj/K#X?'c or6*۲r\>NGL2F:b:8 GPb=)n9#8ޘ[XR2͇&W)ƀA] > 1Op/ryΗ^y^5~ym\t˻V356±C*0d1Ef9 sas$w^8%*'fgp%j^)^rk#rz`r`̾LI)ccl_P(<}M_'u\Ӓ͵2&2y@\__3PK<{9I.f//vϞ@%.P}CG_bbԜ)eE)Sx%DgjT8ͥ N $$4hJ\4ќc8JBQ$!*V j4psB˭,f8Ju~:_Ux:d ;]^z;6s`ΧT(H?1 paijS?YN;c3m0ݥgX7ooV\{pg&rE / \q3`NL%)'KjC[`0;ӌd$3^` s)_+,ywMJ^TN뷯y"x|s;w8."ν{ (]sA2JLO^G՗1t MOk")Ohdh)c h@놘Iv'\$iN>vf J=)ftndXwHtqHZl\+.&T`_Oтe^8 ?,_1[ /\l03DKĆWEJ") ޙtXDt2NCSM6Ɲ ]o~"i \/6l6B!3{1!QS&8bkxs6%,S^7ݵ`jWgęFhV7L:a}>͉@N0*A`7gMɤX6i ξ *eQs_ʪ8͖SFF"ccN0<݌~X"cq{Ci~Qxe3˦Tl>K >QLFfO@O>?s@z滈. g}? T ɑv8U%#q4z_bbGl$Nܼ3GKFhmb488X=383y^!aYQeh~3q5@~buyStn0OD"Η f-ʒ0 +bqc)=. h-‘4ÑJQ׬㖄nj`sT6B/^yy Ga}O nDd HG RTs=|=K#5)8WTl: 9ȬPR9*zG=CV$\8/ mݖ4_R"7oQ}$VDW/'7IDI'm2E9Lڂ{*R@u'(zX:."tU XT`CmyK+'HRh&jQi"`JA=h4sƃȜdTȺwZ֭e2clT5KXJI+jWVE2/bV>#`G[Y5YM2xȘ*˕ɕPpN'NMZDZpA 7g𕐌y)pfv^%"iۗOjyd6T@O܌s_]3W2|il C ^؋&Ҥ(xEdHo~=P5/MFN_˔\G@%o, Bf-HDxc Hw%3Pt*mH9:^o9QF1#alDƬұG#08w9z`~)SXeQF=iTF cNX>gԢ Gn`I|t'nc_RI7'MɤJJؗJ$7#ưP,=&T2j r3Oq+*D< Ԝ$ac3!(s. 
:s5Nÿ'RIQs N9~Yi3|DHzSHЅ,6DzWM;KQh5#19_XWe:iqjzVՕ()/ϝh+_ݫTWWBЈ N-JHwm ]>$T*+霾KKUC2XݥтZ5ǟP|m@ @ MoTEv@*ZNJ-ʀ`>Dv:&ppb*kd1XTȒGh+q[B,8./~oץgZ`2<p$hB3&^[Oߨd=@ >NA5=hccmHbqB+QX_iCQj(Bgb 8 /P4!Fw$]\IEP6D?{ȍ]lټ_"  g&/;0X-D-9I6CIK.,YUr[V}<./ Z8b P2G "U< k&_&tF D ƌEVrWٹ®TNrRPAF00Nҝa–Wl)Z(vV*2j.DJ]/PeC]Tlg /wՆB^&TK-&iOud <)B2q:6*xA +3$;ƹŬ^MY!-e ]ƑBC7q$ θEH-Ό"BQaW##P0b)Rh9ӂKCh13EW>Gp/R[ `J.S![r 69cd(Z6.-Jn&js p6A06u*L!4e磤Ŏ3񾆺o]fq~Sqa,&`rX9^K$LA dHqT$ ˣ¯uÿ8nsfė\B<ÝK+9b)W0V{gZc|8sZ֔Db pRnEv~)J h3AeLH+(z{uJt[T j)TI- [5 B/B'lnh h4e%V %]K at8 IpZ=3= |zi0 ^Je4D]h$e6˸(zR/_k$]SI-KSYFJ*UKվ8r$RFI 9V 6^>T HW4wC`]j;lsU&2!uCL bDo2'(*I./Bbl%,+N)cZpjiBZW^KE"/e?ӗ{ B'Ӆ"Nf,>OgUYofC ƌ#•Ԁj dL\ NʂkJ+n4@hCL=B|5Pa&"]ne}c74R\bu$.LfZz yY%c7PIxqLnpS`w^g֭ek$a}>C'Nd>r>."{w!ºJ}N^c.d# ,ѳ {m%Jl%=|P^N#l8P:uwck|n:dEݺo#xЇA$VB;',!2fUxg]o.r>]oӭ݈u~Bě*~ _p?^fItZHe{ͩ\SfmJdz5Gn>Cx#!lףxϠ]n=.+TkG!!hLMWZwhݚb2tQǺugdn[; GH]#TE??@+fO|BDMPX3]k5%:|O(k.hf &M& .XIieW*(- 쉙p?1#9Mc.JI9&X|D}\b/J] `;7"r6Qͺq\.^y?I18f$CӛDdX)EY!@ |oN4Dc-t:+Qnv %/{**AZ QF4z4ƕg.`O#±J|ଢ%8ϴJp}HMOJZ?E-ۓBJ!o`K Xoe_czLWC_ٗFt5./H$2߬2̈́R]畄~бw1@#(邖sw#I+5SZ*8\յD5CyNVub[yTyRp{h܌{vP6ٕU$U=4BX"F^.:&z@ *M7"E ɻe1P=(0iMĝ+O(Tұ5WپAq&+}ؗv{t syFUuTd{7RǬx"/{w~f>e̙XaR#Z!dIXMiØ|Z7@}}'Vv/VS՝xP&2<7$#mY5i-X/ 1婾0Wdžy3@ ޻{ qByafG6{ qu8O+uW+ĿV~c_T6 sX2IIgG¼x%:БNԩ[,s=OJ>'4īՖ!25w4ؼ +3ÿGS;1B%V$_˂Ofb-T;W`A4)[{bzJGo`N{tEBzօmDGɮ<:!ܯ.C:9@"if;AͽvEY"#Ou#[ 3]qiͮǜB=S($гxhaJ^lt@>` }(I إIfFE|;իnĽp{JY+vFbٻ5=#v<{(oZtԼԥ0!ԶN ȹ@M eqhaR@X;Z=( aK qFFGt.#8Vu$ͩhEJ0bI<@dK[΋N qFG;I8{]xm/LWM @qFFޒB(q>}jZR("i-%%ډb4:w֗wԢ~!@tSb@7Nѹ$*EHl'k߼9Uy҂S NѻJ~4iy_4~^dVcZȧZ\#AN͔pW uk{ٯPW%.($JLJ%h%ji%7îx"؇DPB" CK&`Hq9?K r~?$ Vo??v1I1 ~&tdB i2<RI1 않!M0; DD}04 b1Q0Id=$@Td}^5/#Dij!< G!<lk4`H ?=)WW C:w]6N$s,S!Q4\€WW9u!O)vu%dHR~ aAI!<<:PgXm> zu>{R's a*AְVJUyy9x)࿛,f?p|:*,||r^\壩i4ڤ~ ^'i`Wc=6O֝=crF.G~'V7 _.&ccyb{ >P|){o^t描[- # e\y)VJKbHE4+GEF^O~08Un<-Ӈe>5m2_>/F gUk0(\9Z>J>e/\0ѱI_[j #ͽyi7mھ\1_;q闩l4F)6f]%{lF0;O|Z Y`.I !2a%x~H֋72oJFmfX̍,3;=reB!eĈZxK}[Tu,{B Ιт#I1XH-?KB P-ha5$ij%1=K ;bGKUHBfxUX㕝,)-r7BUc_&5x᫢޵kbiϙK? tE/;(^ZOۖ3-6濟EI;ieSmA2 ER%AP @ Br#P};'%!mt0 $g0{ P-ERbTs`@<r1bCth5Gچ1r"-SʽA3ӹ qes@ wAJ\RMX9!WD 5k i ڳ܃%\I0w!yǽphyIi9x |skj8("f\ߢ`}Lਲ਼F;C;<Ò)=U JnmLT;87<"RIB1G ,i.~L9p夈E c&cVr2; =vd~G_2Iï/'ɑLSǎ_2'Kïи W.b9YU&zp>_r|"8דdC[lM@d2$;L_d&;F-XVpCe;œ X4J0e-i=…D@9*sN|6]Lx]KDzoVoDeoyܰW7Tta>qܰ]ۡt!ŧ )>]nH >YnH@Ju)eI%) oejvK 5v586 ށSTF,,kJ0JM)D)5Y~~[I2)+S9ǂƽ]Ij^9l׋Q_-M_7Weɘ23K(ȱt5 ֪WU֌uHo",XĹ jEb>V \#ˀ!aH0.=XnYh͋-p[Fzuw%-=1vx \O}AddXޏz>JO )yq@u?3YAz1HJ/{׃3$ [y-\X2.'X敓"Zs Q^.ߪɳZ`Ni5]\|{S=t_[q?Saj___v ݬ!y>g^+:y&f^F;z^Nؽל ]'[k]-$ya^Cd''.{-8݃xQClrݛ-0ch}/FxӠ?iN1w76)%z-m''?[?5ꗋ%mNB===/yq0HQ1\Ŀ sX?7$RYFtCn1{lܢNneѭ 9qM)FlF7"bPtRMtko'TE85[r&dSuvk#)"!DT N=n-XjE[tK_jW35a!'nI6UY)pMTfR1c:&Xqޢ[bښѭ 9qM)Z4QUHŠcbtҗ׌nMXȉhM rUADT N=n-ebjF&,M4ɦ:7@VbPtRMtkqE E-o5[r&dS*$1AT=2ꔢL7֭6`!'nI6ȱm[:v+?ؔ&Ƕ z]}$uJ[%؄'@Bl{5"u+ʺezƒd$3֭>V5 \cۃ VJu+U*zuq{4nUCIOc;+zM*]QOԌkQQC]naSc;z5*k]eIO`)_e1ZWYkv>uuf=|5*k]eIOdS۬ .ZWYkWYU%U$W~᮲֨'HCڬIeWY*kzGXYSTv֨'(U֔ҺuF=Ast|5MZWYkӞ@]e5 L㫬iZWYk$WYӚ讲UU0"v֠'M tt58]e5 \uf=Ad+kv֤'`$U0KUֺZ@GWYNWY*kzaxyWY;ؔGWYL UֺZwݩbe sqWY*kMzWG,ڍ^CZUֺZ 8WYZU־ M3$>w6?v@g|ɴW/.Yq,`?}PdqS+ecIgv_66$=+qƄR#gõ XCfw΅)LlG T]cO4'0 ;1bO 3kj TP\Ig  U!B js^XF㫈i3 &BHCȕ$WkcD#0Z!l㈂z=51h^8qj%  BPb%H($^# /9^{b [OX {ŜB<7wFBc N9bBj72BRD)3ZZ!/N CZ:>*7prpR1G y)cH FrB]B}%z n&ŵ,UD|Msio2~,z {n8+|WVdguLưAzY啐+#\mN { JW[0H^,C!s(8\1Hh)猷 Ⱥ`ZBz `,2B)X|68XrJ9;@R3DP[cIo$հG!\=p:-?ϼq |_7~$/; $,B>eB^LU13/S^2B]=b8b.߼xy@ĮQ/^|bca>*2D4z>C:+x ]3w3#3/9fbBkpg>-r`5dwLSES!7;a>3a8/f_r_0FT.ЌFv R+)Q`#ED>dPJ1:(pAf&77cO>fC=kƽq_2z Tʮۙ-ęX5xrX.vE,_(eD.swprȌ2şQV-ǮbƟޝ1{Pb9=Y%A.n[HyG1CjS| l=-dn8F syM]~X a+. !w]}6RT$z35=! 
`{zT̽Vֻ/@E]Yha8^-[hWh ﮭ&{kn|Mw4Qǟ OrkqHȯMD|A1@8fnzo dλw]!셰l  =#Ie(:-۟/աܰ.rCr?@i2eq۬] 3S; BB LEyP#c: ,1,LV%.x㖃l!E>R .W@1rHa(i}Ubt5*E$骳&DNd##:WJ)FFe S )CVa튶R籜?Oq Eno?qGqAf1*27lf~1DUûW?mWj;R3&/8~t~6 _b1#?u_[V|%;syYQ)"qbm Jᬬ5sr=o;j? ְjMv]W,G|]_%7$>+ji{tzON {T~kIξ&P?-ʉfjg*%'$ABR>ҽj„^7ZQiNQ Q9GK$AJD)<#ٻFn%Wy V,r<,bf ׌d&9_Rebݭ_#K+ŏ̯,%~DJop lM"͓Qqb"0PFtRTb>>Q]4uWUCXOSO߂0>iT H K$tF?Q.T>f,h`eZQTGE3'8{QW<^{jAᥣi]Tۙٵ:06F41Ӝ"#Q+$0n ĩ8 cɯAsb$hɬQی D7jolk"fo][o7F S6~SJH|Ww(;41ST9'8G)bNPIb<ZJ,t-PYt{'Tm|BD#$Ej<ӣCcD QЭNiLȩ7>^Qǟp(%  /u])F `Uj jtcNTz.5wt+zQևcrG'Y>dzrtUµȿ\^rS' 'Zs`S2Q[9/0T:;FMN"FwFc:jD*蹑L ۳NuRѱU Wl 8ϭa Sv9_mᖐ$0bSJ4u#5z6W/SHf[#M1)kZ F"mqW4V%iv$΂_|犐~4{Mk,37JM#ș!qzfW\" 2߭BV0gfd{q{TaaNUTIiQ!A%TDuT^M8[XJ6?9SsnHLG޹}On͞e=^$^:l7n2,dXɰȓaў _O.4$:GAq6VJatT9k"XeHD0i {2u뚘͵?%9DlPI-C$p=Ճm&5&/ZrVO D1L"Q6 PN)q6Ib@$"N%5M #X Q[-+$z SFT_=nj28"EPƦSV *}N:)>rJ%ACdQ?A^i JdlToVMc+$`aO#1j} IF&FOI+MҺytN{7u&sǴV%eD!rhR&Mpl̋\;ITVBZ2nʖkVpgZD[Ƙ=|6P~B5{Ъuّ,>ꨳQ#[EthmF0,BA@o';H5. ԠPł:ĹM qyCRq*ݭWѓ*v^)d.Yꪖd*jY.TPR=KI~,h~t6ꔬwx!%k =P)w}vR-);Zn॑IZ. QAD(dDm6H9+ G?s7Q 4R>:T:׺7(U!mSTl"d{NORRJc0 I뷌TEHI"#u|ztH {Әv$p+ \9<:;K}u :}u_BDp0*BB1G|YS'y[ AzW-]pRrK/IQ%$T_qqL껌 OqvJm5wJ,MZC)v-7͝4wK& |2\q] A7؂NJ FFg]ո\fK:Z@hIr֗`?\^HSu+NЋ"j!p>[C 4+M &p#BgnknSö/G*ٶݺ}A\y'Ixbޯ_O|i~Sgn` 1E̿fcN>Pdy~ /$ypT #0dǃ( T{۬鹬z`ĉS$ uwJ:[>g]ZN]ыQ!#FIㄗ!\[Xɒ IJ6h# <]'ݳnj;  gn0/y3Q6&pP]T3w-y!}6x/0[n3 )&-M+=ug@$T΁__ܖ&'TpCݵ00c0̘q״&^aA--b=m4iYcb PύK<^ZakRC^3ʗEXnèq: /  7vԣR=gI3#Dc#-a]DD[_gn!\C~P\D.k8_t^hJoP1|c$ԅ-fwk'Vp;&pFNg`Bέ2'Uմ ѷ,M\:-%h$4?$^"&C9{1(M xhY,,կqRnZN˷auijbRiiVRD͖1aw[2^kvTOlhEp=@O'5_ VixdjS%s֐h F ӁťI:<ɼ+NҌ36 y x3-w0Pɞ ~8Iy}z]JJ͖1fXw>&Yʑ캏B6Yإp~'e:I+U@* +TV3억G;`|ڇ&5Un5[Lh;>xד_nݿ1g'+@>V̞>/!z~v#O\9:^Qǟp+NvU>ppSPXtyqtqc$Ӡ߇^hVүN93p3T?`1`d ۩Xx ?2`w"'/rrh'E^x텎1<fh4A+(%XHtA Q-?KP;PYhH=X毒OIa"uQ%RNI%Kz3rCE/jF.|%hx6D{Ŝ5,mqB>k&\V{rq1G,-Z !9*\O~"hiZMyէxP‡e~y8_$}+O!y׋4ܭB4_Vƭ*R@G61@uHA/hA>ƃגܠPwE܊yA_DEuT?+Nn !Q C5yvuU''#F. DP j * #ʮίZ\B:I7 s&%;[Yھl j7dЇ;_u[Ƙ抰V*5l 2T|CMCwxh Hywc"Q]~V!zY YnPibJ Z @ԌHLe1I24UސH GpHffsO%IO^F o7P'Cy𽟇w8AvhSmz#ϯozp6_h4 0 h6?{}J Iܙ~0 HODA9C MVH4UEO`*8Њedm3ױWJqN քSv^= B h?Tx2 j) J:*!Ri+]U +@W*L*2UԪ5o )dxGjhH=o^ ܊OR%7$*:Bt?GVST5(c;Y(~4ի;)-Y0I3o qZ?5U$S$xÈ"B430 (lL7}Z0E{P. +L`;o )0as!!r٢gb 8^.!̩\:RK0;dPeeG % /BIW T0QNû7S5J(.A )l>f*T1{aMQ ҧB"x541fQX+4 ;"8wTk ~2H$O~Hm'p9*ڲL8$f,N#:$i"qJ)}9F͘ I$\" \ 2% eJ̿{PYB1&HFJ9wqZU!2;\~Ov8zi1}Z|n[<shͫQoL{۟Ȋ>Y<mC(|"̀E4']&6o^«wyivs9>]3ZV!˾ͦ28 (Ie & ЈKo$l2ɫO#Db)>0Tuis#OF=h[7#LnMҥ,zd\2Fl}d)RyΕtva[Mr.aܼ ӱ Jˣ SA%MGX.C,΃ci? 
dGYh'i6]h.ik\e5aOv)XA˱q =]TՕ68{խ "']jL3U6"#my]i%(S4Cvj/'֢IW YWz 4 mCӽm(e@u[5abFBn% }F q.j_ۛ\%?]ϑh<jLe} $#?i#s- D̆HugM} $ J pBK37^$%$F߇h'2}@O_꽝&Yo[Bz˾=AD -l5,5{}j/~(k<ޯb Y ¹\P+8 ^uJŔVMSO,gloz.kj8mm,}^(úrZPZ״M}&#dF*KP}bja(E=i,LىU ݁i!FS-UىUAUvi'Īd-zAܝ&KKW*m[TY q"80*X u+Q Cf'&7x~Y7(~2[7D%TEdAVz9m mnp:A9aG8Xhnb4HɔA*E$#p)2h0BcbӚ,LQ~:2Ak$vQ>ʴaq`˶TکX3i@>5"wmS0iԙ+b;s09ͱWp>\.Ho8[F=J3TS?p Q,?0nC[a{e*j!74а $^nBæ;PUB okBW-/D7ŏLcTbv,F%"0LEbܗ?<"ᘌidHb4.>^ܑWfܑw:1bv0ҒݲKas6Bp-s_;\䊸[aOQU1"A!"`TBZ-i}bXn X$[ʝ1.N5wN&gѩKGPT6v:gowpSX*ѡaZ[/yrJA7qSVG.ye&!t#1F&rxt0#vX72ݕ嵖%o_2v8/)o_2`dRfg=ޤ3Ŀ}sWN7?R[zpzӘjdI&>5,-YW'iTP c_Ə_A-N"5dzhr wo|+-0ϖ4bXW/i.{ߎǻ8~]u|7߾='G)0JW_&B/HY&~\%[ @x%iUa%& ;?&{}춌'eqI;Imjyǩ*I /t% ߽[#U4VZ{i*FP%;~ÿڰ.ax`4zmxq2>dr` g GXV"q F W&HxDhy~[\TVljm`ErS\0=\&4@o`(GK5ǏxRwBld9 3:H(o4CpJ9Qwd֠-+H{tf9JM:?7 aYD $J {C_y6B;S#j,I'3 1jd\\_f_ly6- *W }0pX} vI[iF:RBѶfqv5ʋ&Dhݡr6lBw\|LI]'%W'%('/.YN4!k;yL2\J)_4J&45F &T-N*BEC)m|&*'N2,5z315 33g 'BЬncQKƩ)MσFqb=.QIj^q+$e0/aW9d,hCҮbqLoh-C݌cʪZ4SY Q+*+82숣0K24k2-Y D:2.<"J>3sQJ4@(b1&yCI W39I֍sS]#4k<ଆzB{XԪV=m-"vM .n."djuVp˦Jz#C 7m\x4g{VVt1RQ3^JN]2 VoGb.Fީ.ɳ{4@ve5Ujϣh{c"Oy֤hQ!Ť: $]*8b>A<&Vb 0i&vg@2 M]qF-n=.&z7zcD[Z +q,I[Q=fr5W]0Qk2 c15h^S`]2oh;6 -vIU'&, X9H7$έ@Yߣߤh- 2kekڵj-AbIj+UA"]ÉR􃴀12!el쥄j=mƃ$SK)I tQ6NA,O@ rXѡƒ;?6u&kS ۱c%G4 >Э}!r&+%?A`+z,ޫy^^{m]dSPB\Nӝ6h(,%qT>[P36MhW'ҜrHhUl|)<}16v;<PkNm*CMtH?O7zlFO,:C!{hݟJÃGNfՋLާ~ͭOgiO֬2o& '5tKI?6Ƥ4sȾ#mCP6 <(.RE^EG͓Z>]/kdf561"DWt7VG;F1ݟʺ!Ew?}M-gpk/ 0?wL_nno<oOf51yQEvM!瞲'<?_\Z6гg3ϭgV`-@ƙ)5lxuoN 64nV]k*mCkͱ};UȾyj_KrU]e㟳hO.nq Sxb YxHy?1V\}I1X1zЛs/(=F@wX]hYBצ[~ƴSO7&+J<ŧ/݋UZ&Z͒?# 0c_ʬE϶1`ކi/dgoBNyZw5ioM\yacjy峁7ϼ؍Y(Κl+˜"?k8k{đ&Z8CmktXj $Fq-uZ D-v)\@96Nf#\ƫs|ejGT3`xNeW,v@fBNY2٩k̎r*t Sc1**k0Og!*p""D%F[Â'۴B?VUIuG0~N̲ |k-/`ssFrV% ;RЧ m=H ced4V5bsn} {2k@w 5Δ*J(h v9St W&c%TVf>70; zGtFY;cAe-^kU|jP$u;iQ_>b/JT VbȬmpiەk^򽞾LpFY$o)9*F8ܷy?M(yͯ@.=2$=L,HKV C0I{|lyǙ_:8X<Ci"m7-7Nܜ S|&t+-dT'e8ԣkSR: g}YU=n_EZq \%pQE]%ЯW(5M-'хdk–]f[T.zc0] /}c_4rؗ, Jwl9 <:WMh,^FWB\ 3+,r2.Ǵt.JcJL9]M\j(Dr>hWM678)[o0 ;ծN*hE vXAJ~# ^Uqe4\՚ yU";].ȁ˷>؉OlűPm\ʯvkzKHl=>nNv$kpQvn҃gbP+VFĹv_w #"N7?z|{zI,jrJŨC$ d#M9ζѱ2ξܘ;$%^*z2H'CKx).^{3g#H3ϼ/YG) 7NllOM^khXsP [qf2f IO{c_yk]x 'ÝD;裶;so_8Zt '>?;GULH7F\8ʽxY{3rR G0"'rBx&\$""8TʑdL&e>xfਭ5f5fxkSɳ]o;۬/0u-|3ۗKO+fgΑ@1=:]9h̩DvӮ\DG[n#{;Q+϶QxKhdo=b7{^*A _A*&NhBmLL)^{|PJ=_?A(^ћ,P`-Kƪ΢4& (06(6:iEZ8c)Z)ED!cj@F)DŠBd::Tʰ(3~7e,QkΪ+Т'RfY(+EjےuPӗ/kŐzl`w4 NtdhW x]nPI> 3Fefu;]l;)1GZ^}}KUe?`XfѭeF{XS3[znhW:>vxtq嬰rYeZn3ϣnpg32~Aifϻq+7u -Ur91G/˓bmw”9oN|ȋ*U"/D^%_;1FbG}rɛ,șr Ũ!Psssı/U;qweλr|EC܌EpD.|XzbL: )b\` +ZQrpVHYk(A x4~p'hb#w ])u63h‘8oDY`1ƨ$~I jNGc*lN;YPNKպ2tAP:QPC$rd=D?#l m{w&cbpe+ Ն1fcj }edd[*lٓɈc]X_Lޥ\>H/A!BX1嗓Vbx1Ki#@=#' A>y^5;n ]nvc yw_m0mLm{py}çByww}'G0I>IP[y郳p4*VĖ]UU[Z NRtn]m'\]Gyww{$ƍ&}bpRMQmqbZ)6T[T j\iFKR @&2`ֹoto&AHWgH{/ZO<.w.~s5uMı+e3uZ566YB5trOD@[b:# /XJ\ax5Tw)Rr:؎Zkdsly0!LB2 'Ƥv` ڧk?=d%MKyl?}pnƞbi8 ZkE=X3d$ko9A<8Ns;5$kknEݳU$jv'SydJNSS&VF<ɩllMv:J<GHD苿&vL%.4ȥAui]X74W"«\r-w&y@22(ɁR6! 5dzR7<\re a @cH0mMTj4ndMZ8w;<`ܦRQi#fWE)=mr{v8#U91d;(]'E/G۳^kaѭo/o4/儝@0h'¶RZ~Y0lak˖_F2PLە_r?(i=0خTE54T+ ژo>y4🕯9\}%GPķsXORXÍYuo2ս=zC}!ʏ,SX5rKD4#>@JghL7sff.+g8K Fs+y,I>jLwu|R|REw=h< t[S"ިB4>"Oo ߴNiwPfFkmerO4 0&K./^\zwrFyqjdCZU5>`a n[)FM$ELBBʀ93x'NH/=:s4u,]:%ZT+:8"6]wj)zn<3 lS3`TK$[Gjk6HM{rTQH3Ms%c̫d%NQIC<$en;IEr߷B %Spok!{2'uX:#N(Gzr苏v\ɫ߽?c-bљ4 7Hy[l)l;{5)*@4'FMj38T-{ʇ-j@_zEwP2kEhH_NGK"[FΪ7lbLN04xE+ꭧmP @zqCEyԗXu*; p v('5E*y^gG4n S&xcr \1=aB!aBU1)Y}i@@́V4Ȍ5 ,#xE-fq.zS5姯nV^tqPSFW;H^"=c+9m]l=&1=lA2_A7:f)&5Uop9rRGɴ<ͯU&GvnZh2ݴm^jbN+!N b5@;"q.Tr. 
~76b~]'aWyWh[ۋOvq9f GmJ^V2yHbi:ݸ]T+&da-ڢ"Ԅey.P1ЖӼ!I<ŏT:JgBesZvݎ.}tX^ҩQ61%#s%pUNySա-V{*DJul7J1`qLDj(Ķ)e @ۏ1|(V?ERv39 =ȶJȽC]u# L^ S M-\;DX2o - ) h`xuʩu-1SnJ(3Z_+-WrA+yP-!H*"Il*D yqK-Je"JY1>Z)vۜ ސ=RZ4*yre^]/ w"a#4 "j˝IaÞ܂oĝiڨfp8gC!$`#6YFQ3$D0!+W` iz.8tSWL%;⡬`ai+e}ͷ 2bN/V ^يG:iX5bBRMLU ij:v¹? sUlg4ġ#HlDtV.,OszY,@ze*zFNH[Un~L7@Q.Ӕbio͢X1FKL,er(ɋa$(,(XkK,R,%J_ ^seD|pƍ#Uf%4.WN栵ċL D9g b%?GE}4G 6)LѮMnj!`rʁKіӚSpP4m4堙R گ+g\1my)(RqT$Firc;PXW͊>M혈HWM}xV$V5wO9~BYqfi|غ5B3~WesGpLw!E:qNT`~9RHeIۼ<ܞzWOv:r1wxV$HfEFtBD6HF(4#!7R ]6ɴ8O`1Ȝ̑Dsdwj29l_D+پWحJm%:8$#^zQ=0;Eb pn!\Pxo5^5׋U!iS_/NFӣJxYKN) c(-iDGhqGOS}B81hz6PShttcp(䕌AW m " g-#2ngCC*LA@:Ip ۇI7Ppqۀ1%))ڍ~0#+lH T3 s&7"(LhLh,Y1} 2/8Gɳ"Y YyV39 V 3`YVmQ۵C,1#)苩W%MVZ\t޳ШVC@Y X+ [%n<9߱Rjh\@s{d= 6`j A;^,@JXq5'<:DY"I H7v-eO ^-WIWУOƿgz4k?!9qjD"!rۉ0~ыE0z&`/2#8oS^ze6[ 2O!/2ztr-;L(t;mwavUN=n%12F$>vc$]Nxs@Еm t'4m [x[hH֕-"LW `S`xR!*})3Yqy 'm"m~[M&k7v0&JW[D[VN aKEUNmk|K˹uP Nʩm4\=**Ԙl[)'Sra vRG(Su" Y-_,BٮZ)ژD!-v+{LWk'Ƽ ZޱRT*zc3]i.c\ v1+:TbDχmt|Nz(??~$sMg՗.уq:oiz:)ǁ%&ZbUD2(Tww㻗,DhǼV RVe*# <Ѕ"* ry5쬤׋T%Z6\1yhxfsH dWoG< Yy<'oeHSb3H6i3((P;uc9";L`*BSsLR8NS\l;%/S LR)y:CD0G9GxڂG`T8;|gxU8CTd]t#o7GÌj #j#35Yf:nu}1x!5IY%.׶݇ydCRNRot|z4g;f) 2W82Aӽ&Ƭ7x'ƈF5y[pfᆛe=PqiPXO ᶪ;nbdU*WgZj kI0hh4?= Jcx?OWK@U.}8iAN.s[2,:*{kp^JlA=/o@!~{g0l4\:KB8 ѮJ̑V5Y!Cexא'?uքɉ2jq9>T'hSH!($$w`Yrݠżhl~ iJ? gz_vh,o)"A&A^0x;̙seڲdINL<]UEUIx; e+eff9n#;V-ȭ˲Mp }"FDvBdd7z{dazXp츭 rE AYXm—6(|%-W9oprzt9+(Rukl{W:K:-cޛ|:39@4vB[Ӳsjlۡh oT#euLGPXܢ{tD׃6-'Itl~[TΈQmKo@Q#jsk9Qk"9LvQKk8KZ >XugplSv z g^6d5t=:%Wڂ dH4/s9-wck ×n+Dr e[ nQIvBX&+ɺ:Ce ^*-Bt`+%K8^GBtR0tk,0T ̏nvrSB(iAA/=)&hyVB*ri+ 4+}cFGγx XPqiNF,֌eFL`n.?&/(/=gO 24|W;<F9yl X& N]dt0j.|LL Fn}N h`7zFa3\5 leS+i)i) Y)Ov.%tOi̬9SdO7iO_3;4Rɞ.S5jdO KUv=ɞoO!Ӟpj&{Q{`86=l;LFx ꄢ|y1GENlim`Sc_zߗ^T3!pzZ0g-}5OJGlV: ^^g_ñq:9N>ysp}zYp*UZr)[*UZJ\Y1hyǥX~O|u>Lut4pۛYb/%+ge=-57ӘIdv;avjToxɦ'zlqӦ>' W*YXx>,p/"`c)dGRUPgGǁ͎/͎t.ըߥ |qrx^X`Q+*++}Y xX± ˆtXRP |;ܬ_zZ>KwփͿP :|֔F)z\8LY@F?P[M5o 6Cyta(㦛:Iێ?_֦(~Y-~we_OKz+D逜0|{vFê9D!/0b\H<s(xmGnI-Qr΁ry_n/.QIUK9;\wl\Km-MuS#U Ƞ1 {̦?(SbW%yT ֗WXRc& & -g+8 P Qt_|m<˵B]Z t1ε"3'_姮DC ["x^VR`E4XU1ue'9㼞"V[uȂhU׎j=mHg}CdfFlO>.€hbt^ ;bcROO;e2B\cd|.a&͜4rնh/tMzCݮ}ߧa910V3p^:u̗/VZ(pSjLWaX5J఩N󆻠X+ &y$/XI3 -A @YZ wTнuvfidt@g00硌$j&ȄMm"AN[|0:(D\{%sh("2IKEv+Am18^r9[*"ぁ4d,U)E!\_Og"l;iΗaWmvFtN'5QIZ!]V.mo4&0KSs4GQ1mU7dj9iά-nvMݻt?jE'R7V]=RHxlPki$c%J0j~g'uΗpɸl.)lԍ2Oۈ4 [u Š|I6UU%+KBgn,7UDw+ȕ^,kZ)͑tux P/W'S \5\O>f?S"i>D,no%u9]\.ޑ_6d::<$k>g/?2llq,"\͞~ .hG?u]V7)@RqY2~yAV6~U/Jtp_ Xa W%hgb+k@Ө@Mi MYG d#IV>e)ohu;G` hX$C7!*ƔtNL':lUQ{M"e^*--qc;(d@YYi%Ĭ^ gZ;;+7uqC:^|HRhʘ2Z>j#0%lܔHV:H'hxVgdέtٟW)U}}j7it]\?8xAt=p $3\U5SX5@Bȁ`䈗ioŧ> BY?T˷cpԼyAYUѴ@BaG9_ۭƨqG3!*@m./]3ƾìG"ݖ_ƃPbhsDg +6b1李ꫥEo̊Lp5j$%#eqNhM١NցjNqX(f>IO&B2zJQC ^[E\[/X4灝˂^$mLj43\Y> 쌺}*[CМYY6Yk(ԓ-ށ=(5NF.ݤd:9K >LKr S4^l9Ž*TOEwܠWJ %5n%緶Rr=Ml[) W]L9RP>Z2&ha@GZAihm54!jbWpRX\͋҂ DfԒs61CҢ$dG?cYٹFNvPҢj8F+AD!E+ZuF$^}F:\qw8~o2k~64rvY#ZG%QjӕRH^**>#zFCCbfxB(()64r ]9W_`9 7j@:?(_2cx5f 2FmO{,PhY2()̽kϜ w3|+w6 $`%5n_rsEu9b( 5c;j uy 7v_PbsĜPPg [k[(>ޖY=uccaAnjroHg8Ֆ7S(ZjKzC {^ғs7p{ZSZ%,/}هr. 
oc\{e,tþ.0w=ϋ}M^5l}cڮNp 80}c:=Kɔ{@تQ& H.0PO(U녳D[\LS)26A TLv{wv SS>pfA6`仕GLmqMyU;҅^t=LFZHqO*]geBYzy-4`f(fqnY]`YJ߿@MZ_zߟE|z7 ):wiyH#6a4I19cv"X~~`'qb'q)sp}zYT)9zHC>smp BI[q+I5GJ'}_uهzbΐaRge=a11g14v|i~dړvaFq"1=2h#K d *'T^YB/LyIJ-?͇ OYG 9]uV hobJZp^s1UFbTd)E*7t<3TS%ȓUT$AGi oLQt$Ϯ)+gWìQiVSLBڕ31PɑS4bZMQ{q)t#Թ v)[BW,fK8cNA6Rl!nXi1 $ֽtt O<~L]KNYmnڅ2QNɁ[j+ynZ?ސRwp}Z\vp'ϵZQ^N~Z>&УKS/1dIUxU^jHڍ4ͭ7%$3N{Uxq: 㝆ZA̮MYE{NR[;8/y{s|GOϟ+  ,+]8ɺRjDsd}.rq7(eM"B/rկoN5_лۊpB4)k]~9_Uz@҇t ;PUM6귃RK"C1( c+ E KJeڽ1!ѿHG^sZ53R u4 hP#DrHF떳-RBC?f%+\kک@ e53YwСO^OIV YZJ@%/.(\:J)ItUQac zu1\i{1c3%9Uh^ V^vqX (0#67*9Kxâo7N>Y'5:uPsg7ͪ5r|,yr0*3Kkb+(ݾY'Z/?_>gi&~%t7gz8_!r[u 91l%B_-²MER҈"9ӊI$]Uץ/U),<ŸMzsޘc˴RE"}lepDau,L*&6F%7Ó`J5u{%nn ;=R֣B'z;Yy.hc.zd< R,{IeGDN.xF/zK^s녕3cDYB))7:hADBc2JLN"Z>"qЖ72\$Wrt^f/Y>8*.@ӺsSj}^-B;&9⛴НxmQpd*t{sC K1DD*-$!$rh1R,Ҵ΁!xٹ5AXYW'38bR$lIݜ{Ȁ 8ך41ʐd#F$u3w1wI-d$4. dsΖJ|eVN諓ɹ|ztW;~w-]9wۻO$-pSf~dOפWQz_Md݊uT̐bzZHNdtnw65W? Z)mg#"V,*dk¯V UVN4~l悷B0԰uC)զփ'3nj~wm> =q2snnT0’B|I uN$IHrN\5"17znZ8xQ̳SbioƲ 6ydD! d[3hfJ{:W>kC(TU>Ke1tnq#S]@.Kj, 49҉1H9.'Q*%fH4!]eSao{tWS7H-Y;]i5[r؝=h؝yQ-S1ݙ,%3 pCZP"QGJX&NdIyly6)^% FBHց ||w $Q~BfS1p-x1KgC,Ș` _$MFH|?gI DO0xۘC)l':-2$G ^yk$H#R0u`jkȮKzRR}"#W{}/ep橾OD)pWR}Y(8W 'pL[a/"5jsfC LeYI0hȶ);IhF @Y&&grVD6i791~aVFhu3m|W8Ƥإ@q0t&i$.Vdžt|uvIzTeFQěR⊁TrF$G RLd2w^,~_Z! +8>9*HjuaOS?)IvON6 lyE1@}1 b,Tgg jgkfӳwkjzۨP]Oi]f3:Y+8V.Kz:|K7l8P@f}.Ǜ>:hd Lb<-gsC8?$>gX@O|h~ô4ˍدs"}si+ͳZ;T8^cKi9413f![*&~ &#htz?Pmkc+Cgù^nv)\k!EOJ\@ɷE}WwqKOvLr;1ŪF?;9DﹲcD%oDw;s/ضkփ{e_ꫣx~a?D]$E6Ȅ_V 718spPQ(w2m)p:Fe2'-`7Vjd&YXdo$VOl?jE|uqV%l'%Qan. 48:"rQ~?5$sBzUiW\XӫSԯ]4"v4||~}p0y(0'*_2"xw(cfuު4:]:oz% FPN/W{H/RYlcB$IA+\bR2@0K?\_; " _?Xz5G Ӌm b;5@-WgxtLo,B1xnuԆQ_\}r9ʳOxڽIuX${-6^]o#7Wwkd9~;l|ãJv"[zlCO$˭fWbUX ?V`VM`ceqE_}v%iB,,пZܝ5MP'm0iZˋbOljvr2̗?rOvB/(իm@sezwN.u)8#ϙ}V֊ sZ L9%g =f;b!uu?gv-`}s*ALy$3j˼z!;$3:)[t9?QN&WQ/9 e:r@0 = h1;bP? 9cz w?|O&9ղ^g+`f@j8G?判ȗsXTZ!y>鉁xAv:w>돥R x'gq) ;c OEHVr,Rnɇ`SJh6R=mOE.~iy"ymY^jWDx;xu| V]Uɡ+=^jK 9Ht}uDy-zhU4X5`q0KRxtюJKJ!Z ^%bpV`$jxj֗PFlAK]Z C|[Za,r4{h@qDZSYޑ5 {ھ.q X&pn8iY]74y@uPuo DS!) e*~?[b2_U;+Ă#лX6],T_vZB,@0hMTR()Bu MM@t>5^Zky)1r[Mq>ը2J]Q/x@wJ9)h,i !Y" ՗Ģ5-1nMT+CoYF"@N='"bDF;jC`XP}V-^FY+CZbMt"D7iq*eJf)YDD6(e ܆w2e &&DIx㻫JfO^E'}SΆ/|'sbLF f-T4j0nSQ-5F+Bږ ZOiEtѨBd2jOZ1 GoZȪ1 9yc-FThZ*r$С {wXI+[S@XOwk]L H]G1f qA01Zo6 0QޘӸpI)Qu +h @0ƝJ|"Ƥ-ޘ0p 3/)YD*W2)Z<Ĺ,Ei ,dl8]7lO|lNHTjmA.b6LBKu+? >(5wU!?T'ZTޏYҭ~wя᭨DJҾ Hi/U\L/oo5jg f hXд,QQU*pPBm칇_2r^l\RQ}Qt2?1{8yFGn$237iHjA^iɏ3C^b}8HK=P$Yu]G8!ߜUP)!{4sV,+Sɔy_2oTByL 5"{8|$m=väQ&|+7{.,%Х 9T)Gy̵fL9P%^bqKR898bOJ%)i(Vn JAL2Y) =D)BwRHZSКhS $U%rV4@G?ٻň~?j4ܚtW9j62փQgc4 W(z|cCnD}tB'I=Nz$8Ǻ lt(J3h!+L6P i|c}d^]4Q>*VvaƘUA%V05h3:9uE֖} ›Ԧ3=8!4b7*Ap'.IkԜX}p00,6hk.u嗎L\RPQ+[kF7heBU_ZP Z͛+F!L5L ۯT}] Ds#fPYL#f*JHzC3{3tT&Tj[.G`Lަ^QvFz~~C(S pFk;o7C>=|/6%vux[Glұ3;Py뛏?۫{|GqOBt5\*GMvtkW/qg ݡgˮ[=I=ARdj1Jv& nF7C{D?܂ (פSB^ݹ 0'ؖjYWmݒMҟObe<9Q-uSP^oqs}[/B*Žt"ο Gp e(M!)|px|QzN&2`Z7&UJm.A `Bc"3EE7gjߔ㶠joMMKoe+2@D::Y1 JUjUԘ _ e]iĵtsjo{&k1L~: 4%N jE"yrb81q?ж]רM7CghԸBL<Ŵ嬷LD=R%$OyT*v>=:jC7|@>{;.{kkha ?KVWnLRxw_DSR+1UMQ,<2KN^t}I0kלjiaé,0PboGdW:ya ث[~YfDSJ{`${=IٽbNtKa\JkӚ#8Щ+F`"Ol9PD̖5\۵4`:FnkLz$,Ggeekdz$ a8Wx-iM^Hg !&g(EYD4t !U ʘQ1 }y!M!ܼΥ~u XP^?,˳fCbv.U.Ij\d=K$\FJ6TXd`y&8| 8aeh*b_@uma@zvONK]@}|O] R/n }7~w?E$XT;`+'K%X+ W&gv4~4u 4jF ->QEMe5!adˋS=F{2@ VIڸoYiJu&^hawnj kTGN]TLXRj^sFt;6 LfMύq_,":zPiAő OUj^ ̨zHUVOf) F"?'Z$lHj:$J`S hU}|zu缭V=ZYJ#ꉣH^|xp~9v>B-;%m|/3w<:8+wp FRn#_:nSs 63ȉy# M? 
Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.315811 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer"
Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.316298 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.316385 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.321808 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.658152959 +0000 UTC m=+0.933649899,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.329065 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.65818799 +0000 UTC m=+0.933684930,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.337818 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.658202231 +0000 UTC m=+0.933699171,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.344638 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.6600935 +0000 UTC m=+0.935590430,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.351667 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.660124551 +0000 UTC m=+0.935621481,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.358197 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.660136881 +0000 UTC m=+0.935633811,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.365695 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.660632374 +0000 UTC m=+0.936129334,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.373978 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.660670095 +0000 UTC m=+0.936167065,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.380119 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.660688165 +0000 UTC m=+0.936185125,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.387010 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.661401454 +0000 UTC m=+0.936898394,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.396039 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.661428294 +0000 UTC m=+0.936925234,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.402405 5108 event.go:359] "Server rejected event (will not retry!)" err="events
\"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.661442195 +0000 UTC m=+0.936939145,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.408502 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.661930607 +0000 UTC m=+0.937427577,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.413910 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.661969658 +0000 UTC m=+0.937466628,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.422161 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.661991359 +0000 UTC m=+0.937488329,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc 
kubenswrapper[5108]: E0202 00:10:22.430883 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.662980835 +0000 UTC m=+0.938477785,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.438768 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.663010016 +0000 UTC m=+0.938506956,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.443786 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.663026216 +0000 UTC m=+0.938523156,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.448479 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.663151439 +0000 UTC m=+0.938648369,Count:8,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.457656 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.457599 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.66317168 +0000 UTC m=+0.938668610,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.463519 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1890457046390c1d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.073402397 +0000 UTC m=+1.348899347,LastTimestamp:2026-02-02 00:10:02.073402397 +0000 UTC m=+1.348899347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.470764 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570463b43e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.073547744 +0000 UTC m=+1.349044714,LastTimestamp:2026-02-02 00:10:02.073547744 +0000 UTC m=+1.349044714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.476199 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457047da5e2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.100751916 +0000 UTC m=+1.376248846,LastTimestamp:2026-02-02 00:10:02.100751916 +0000 UTC m=+1.376248846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.480942 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570484f1797 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.108401559 +0000 UTC m=+1.383898519,LastTimestamp:2026-02-02 00:10:02.108401559 +0000 UTC m=+1.383898519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.485568 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1890457048dc505c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.117656668 +0000 UTC m=+1.393153638,LastTimestamp:2026-02-02 00:10:02.117656668 +0000 UTC m=+1.393153638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.493425 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1890457070a3a83d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.785032253 +0000 UTC m=+2.060529193,LastTimestamp:2026-02-02 00:10:02.785032253 +0000 UTC m=+2.060529193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.498981 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457070a4c1db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.785104347 +0000 UTC m=+2.060601277,LastTimestamp:2026-02-02 00:10:02.785104347 +0000 UTC m=+2.060601277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.503164 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570711fbf73 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.793164659 +0000 UTC m=+2.068661599,LastTimestamp:2026-02-02 00:10:02.793164659 +0000 UTC m=+2.068661599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.508208 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457071960844 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.800916548 +0000 UTC m=+2.076413478,LastTimestamp:2026-02-02 00:10:02.800916548 +0000 UTC m=+2.076413478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.514617 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1890457071ec8975 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.806585717 +0000 UTC m=+2.082082647,LastTimestamp:2026-02-02 00:10:02.806585717 +0000 UTC m=+2.082082647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.519679 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1890457071fcb79b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.807646107 +0000 UTC m=+2.083143037,LastTimestamp:2026-02-02 00:10:02.807646107 +0000 UTC m=+2.083143037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.527371 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189045707221f179 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container 
wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.810085753 +0000 UTC m=+2.085582683,LastTimestamp:2026-02-02 00:10:02.810085753 +0000 UTC m=+2.085582683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.531562 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045707239f9ea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.811660778 +0000 UTC m=+2.087157708,LastTimestamp:2026-02-02 00:10:02.811660778 +0000 UTC m=+2.087157708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.539301 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189045707242224b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.812195403 +0000 UTC m=+2.087692333,LastTimestamp:2026-02-02 00:10:02.812195403 +0000 UTC m=+2.087692333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.548472 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045707351a0ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.829988078 +0000 UTC m=+2.105485008,LastTimestamp:2026-02-02 00:10:02.829988078 +0000 UTC m=+2.105485008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.555638 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570865a4417 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.149321239 +0000 UTC m=+2.424818209,LastTimestamp:2026-02-02 00:10:03.149321239 +0000 UTC m=+2.424818209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.562128 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570872d57ef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.163154415 +0000 UTC m=+2.438651355,LastTimestamp:2026-02-02 00:10:03.163154415 +0000 UTC m=+2.438651355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.570714 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570874cda10 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.165219344 +0000 UTC m=+2.440716334,LastTimestamp:2026-02-02 00:10:03.165219344 +0000 UTC m=+2.440716334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.576858 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570928b817a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.35387481 +0000 UTC m=+2.629371780,LastTimestamp:2026-02-02 00:10:03.35387481 +0000 UTC m=+2.629371780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.588122 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570a0c9e4f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.592844536 +0000 UTC m=+2.868341476,LastTimestamp:2026-02-02 00:10:03.592844536 +0000 UTC m=+2.868341476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.594424 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570a11d486d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.598309485 +0000 UTC m=+2.873806425,LastTimestamp:2026-02-02 00:10:03.598309485 +0000 UTC m=+2.873806425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.602117 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570a1ae24a6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.607803046 +0000 UTC m=+2.883299986,LastTimestamp:2026-02-02 00:10:03.607803046 +0000 UTC m=+2.883299986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.606846 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570a1f45abe openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.612404414 +0000 UTC m=+2.887901354,LastTimestamp:2026-02-02 00:10:03.612404414 +0000 UTC m=+2.887901354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.611763 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570c3666731 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.173526833 +0000 UTC m=+3.449023793,LastTimestamp:2026-02-02 00:10:04.173526833 +0000 UTC m=+3.449023793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.618663 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570c3ecdb91 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.182338449 +0000 UTC m=+3.457835419,LastTimestamp:2026-02-02 00:10:04.182338449 +0000 UTC m=+3.457835419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.626510 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.626812 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.626836 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570c3ed92e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.182385381 +0000 UTC m=+3.457882311,LastTimestamp:2026-02-02 00:10:04.182385381 +0000 UTC m=+3.457882311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.627897 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.627981 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.628006 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.628666 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.632862 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.633747 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570c3f206b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.182677169 +0000 UTC m=+3.458174099,LastTimestamp:2026-02-02 00:10:04.182677169 +0000 UTC m=+3.458174099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.641115 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570c40bee67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.184374887 +0000 UTC m=+3.459871857,LastTimestamp:2026-02-02 00:10:04.184374887 +0000 UTC m=+3.459871857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.646556 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570cb7b55f5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.309116405 +0000 UTC m=+3.584613335,LastTimestamp:2026-02-02 00:10:04.309116405 +0000 UTC m=+3.584613335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.653261 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570cbcb66be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.314363582 +0000 UTC m=+3.589860522,LastTimestamp:2026-02-02 00:10:04.314363582 +0000 UTC 
m=+3.589860522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.661800 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570cbd2251e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.314805534 +0000 UTC m=+3.590302454,LastTimestamp:2026-02-02 00:10:04.314805534 +0000 UTC m=+3.590302454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.669462 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570cbd52a58 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.31500348 +0000 UTC m=+3.590500420,LastTimestamp:2026-02-02 00:10:04.31500348 +0000 UTC m=+3.590500420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.675885 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570cbde22d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.315591376 +0000 UTC m=+3.591088346,LastTimestamp:2026-02-02 00:10:04.315591376 +0000 UTC m=+3.591088346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.681810 5108 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570cbdf5f3a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.315672378 +0000 UTC m=+3.591169318,LastTimestamp:2026-02-02 00:10:04.315672378 +0000 UTC m=+3.591169318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.686842 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570cbe09c26 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.31575351 +0000 UTC m=+3.591250440,LastTimestamp:2026-02-02 00:10:04.31575351 +0000 UTC m=+3.591250440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.692393 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570db057576 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.569826678 +0000 UTC m=+3.845323608,LastTimestamp:2026-02-02 00:10:04.569826678 +0000 UTC m=+3.845323608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.697804 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570db26256e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.571968878 +0000 UTC m=+3.847465808,LastTimestamp:2026-02-02 00:10:04.571968878 +0000 UTC m=+3.847465808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.704020 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570db2fa13b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.572590395 +0000 UTC m=+3.848087325,LastTimestamp:2026-02-02 00:10:04.572590395 +0000 UTC m=+3.848087325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.707818 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709656 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52" exitCode=255 Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52"} Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709903 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709995 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710657 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710743 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710787 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.711533 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.711546 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.712035 5108 scope.go:117] "RemoveContainer" containerID="3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.713342 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570dbd3ca1e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.583348766 +0000 UTC m=+3.858845696,LastTimestamp:2026-02-02 00:10:04.583348766 +0000 UTC m=+3.858845696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.719756 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570dbe15639 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.584236601 +0000 UTC m=+3.859733531,LastTimestamp:2026-02-02 00:10:04.584236601 +0000 UTC m=+3.859733531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.725791 5108 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570dc355a3f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.589742655 +0000 UTC m=+3.865239585,LastTimestamp:2026-02-02 00:10:04.589742655 +0000 UTC m=+3.865239585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.731595 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570dc38aabf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.589959871 +0000 UTC m=+3.865456791,LastTimestamp:2026-02-02 00:10:04.589959871 +0000 UTC m=+3.865456791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.738599 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570dc498048 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.591063112 +0000 UTC m=+3.866560042,LastTimestamp:2026-02-02 00:10:04.591063112 +0000 UTC m=+3.866560042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.746298 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18904570e47d61d9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.728680921 +0000 UTC m=+4.004177851,LastTimestamp:2026-02-02 00:10:04.728680921 +0000 UTC m=+4.004177851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.751980 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570e83df3ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.791632812 +0000 UTC m=+4.067129742,LastTimestamp:2026-02-02 00:10:04.791632812 +0000 UTC m=+4.067129742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.757819 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570e862eb1a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.79405545 +0000 UTC m=+4.069552380,LastTimestamp:2026-02-02 00:10:04.79405545 +0000 UTC m=+4.069552380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.764741 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570e9153190 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started 
container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.805738896 +0000 UTC m=+4.081235826,LastTimestamp:2026-02-02 00:10:04.805738896 +0000 UTC m=+4.081235826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.769181 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570e92a94e0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.807140576 +0000 UTC m=+4.082637506,LastTimestamp:2026-02-02 00:10:04.807140576 +0000 UTC m=+4.082637506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.780457 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570e9326f3e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.80765523 +0000 UTC m=+4.083152160,LastTimestamp:2026-02-02 00:10:04.80765523 +0000 UTC m=+4.083152160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.784787 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f58e0054 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.01498274 +0000 UTC m=+4.290479670,LastTimestamp:2026-02-02 00:10:05.01498274 +0000 UTC 
m=+4.290479670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.789277 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f683012f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.031039279 +0000 UTC m=+4.306536209,LastTimestamp:2026-02-02 00:10:05.031039279 +0000 UTC m=+4.306536209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.796421 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f691e318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,LastTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.804394 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457102b460df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,LastTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.809139 5108 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045710387965a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,LastTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.812199 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045711add2f87 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.640929159 +0000 UTC m=+4.916426089,LastTimestamp:2026-02-02 00:10:05.640929159 +0000 UTC m=+4.916426089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.818398 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045712a082270 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.895402096 +0000 UTC m=+5.170899036,LastTimestamp:2026-02-02 00:10:05.895402096 +0000 UTC m=+5.170899036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.823306 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045712b033e6d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.911858797 +0000 UTC m=+5.187355727,LastTimestamp:2026-02-02 00:10:05.911858797 +0000 UTC m=+5.187355727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.830511 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457157009ed3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.649884371 +0000 UTC m=+5.925381331,LastTimestamp:2026-02-02 00:10:06.649884371 +0000 UTC m=+5.925381331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.835151 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457166f7d399 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.917743513 +0000 UTC m=+6.193240453,LastTimestamp:2026-02-02 00:10:06.917743513 +0000 UTC m=+6.193240453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.843748 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457167e76251 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.933443153 +0000 UTC m=+6.208940093,LastTimestamp:2026-02-02 00:10:06.933443153 +0000 UTC m=+6.208940093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.852765 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045716801e724 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.935181092 +0000 UTC m=+6.210678062,LastTimestamp:2026-02-02 00:10:06.935181092 +0000 UTC m=+6.210678062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.862364 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045717850d4b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.208789174 +0000 UTC m=+6.484286144,LastTimestamp:2026-02-02 00:10:07.208789174 +0000 UTC m=+6.484286144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.869910 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457179698997 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.227185559 +0000 UTC m=+6.502682519,LastTimestamp:2026-02-02 00:10:07.227185559 +0000 UTC m=+6.502682519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.877856 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045717981089a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.228725402 +0000 UTC m=+6.504222342,LastTimestamp:2026-02-02 00:10:07.228725402 +0000 UTC m=+6.504222342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.884356 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457189184941 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.490296129 +0000 UTC m=+6.765793109,LastTimestamp:2026-02-02 00:10:07.490296129 +0000 UTC m=+6.765793109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.891425 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045718a6772cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.512261324 +0000 UTC m=+6.787758294,LastTimestamp:2026-02-02 00:10:07.512261324 +0000 UTC m=+6.787758294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.898977 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045718a7e1299 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.513744025 +0000 UTC m=+6.789240965,LastTimestamp:2026-02-02 
00:10:07.513744025 +0000 UTC m=+6.789240965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.905680 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045719b0e4149 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.791628617 +0000 UTC m=+7.067125567,LastTimestamp:2026-02-02 00:10:07.791628617 +0000 UTC m=+7.067125567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.912196 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045719c21f473 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.809696883 +0000 UTC m=+7.085193843,LastTimestamp:2026-02-02 00:10:07.809696883 +0000 UTC m=+7.085193843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.914390 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045719c469bd7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.812099031 +0000 UTC m=+7.087595971,LastTimestamp:2026-02-02 00:10:07.812099031 +0000 UTC m=+7.087595971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.918528 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904571adaeac91 openshift-etcd 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:08.104131729 +0000 UTC m=+7.379628699,LastTimestamp:2026-02-02 00:10:08.104131729 +0000 UTC m=+7.379628699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.920912 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904571aedb7dac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:08.12384606 +0000 UTC m=+7.399343030,LastTimestamp:2026-02-02 00:10:08.12384606 +0000 UTC m=+7.399343030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.924219 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18904572ceed8866 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:12.956866662 +0000 UTC m=+12.232363622,LastTimestamp:2026-02-02 00:10:12.956866662 +0000 UTC m=+12.232363622,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.928552 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904572ceef42e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:12.956979945 +0000 UTC m=+12.232476905,LastTimestamp:2026-02-02 00:10:12.956979945 +0000 UTC m=+12.232476905,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.934426 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.1890457375a463fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:15.753868282 +0000 UTC m=+15.029365212,LastTimestamp:2026-02-02 00:10:15.753868282 +0000 UTC m=+15.029365212,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.938335 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457375a569f0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:15.753935344 +0000 UTC m=+15.029432284,LastTimestamp:2026-02-02 00:10:15.753935344 +0000 UTC m=+15.029432284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.943752 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904573ab3c335f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:16.653009759 +0000 UTC m=+15.928506699,LastTimestamp:2026-02-02 00:10:16.653009759 +0000 UTC m=+15.928506699,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.951605 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904573ab3d0574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:16.65306354 +0000 UTC m=+15.928560480,LastTimestamp:2026-02-02 00:10:16.65306354 +0000 UTC m=+15.928560480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.959607 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904573d011d060 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 02 00:10:22 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 00:10:22 crc kubenswrapper[5108]: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:17.270988896 +0000 UTC m=+16.546485836,LastTimestamp:2026-02-02 00:10:17.270988896 +0000 UTC m=+16.546485836,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 
00:10:22.967707 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904573d0128dd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:17.271037397 +0000 UTC m=+16.546534337,LastTimestamp:2026-02-02 00:10:17.271037397 +0000 UTC m=+16.546534337,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.977228 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904574fcc2c49d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.315750557 +0000 UTC m=+21.591247517,LastTimestamp:2026-02-02 00:10:22.315750557 +0000 UTC m=+21.591247517,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.987281 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904574fcc44f78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.31585164 +0000 UTC m=+21.591348600,LastTimestamp:2026-02-02 00:10:22.31585164 +0000 UTC m=+21.591348600,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.991792 5108 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904574fccbf587 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.316352903 +0000 UTC m=+21.591849843,LastTimestamp:2026-02-02 00:10:22.316352903 +0000 UTC m=+21.591849843,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.996641 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904574fccce39f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.316413855 +0000 UTC m=+21.591910795,LastTimestamp:2026-02-02 00:10:22.316413855 +0000 UTC m=+21.591910795,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.001219 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18904570f691e318\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f691e318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,LastTimestamp:2026-02-02 00:10:22.713465394 +0000 UTC m=+21.988962324,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.006225 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1890457102b460df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457102b460df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,LastTimestamp:2026-02-02 00:10:22.983679855 +0000 UTC m=+22.259176785,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.013705 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045710387965a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045710387965a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,LastTimestamp:2026-02-02 00:10:23.002223606 +0000 UTC m=+22.277720576,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.452603 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.678048 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.678393 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.680011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.680074 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.680096 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.680843 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.697660 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.716474 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.719627 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.719963 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"}
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720175 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720903 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720917 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720940 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720954 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.721468 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.721695 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.106170 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.450506 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.724534 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.725784 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.727982 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" exitCode=255
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.728041 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"}
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.728089 5108 scope.go:117] "RemoveContainer" containerID="3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.728328 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.729320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.729364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.729383 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.729943 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.730417 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"
Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.730802 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.736034 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.450554 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.732262 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.753067 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.753331 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.754261 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.754381 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.754456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:25 crc kubenswrapper[5108]: E0202 00:10:25.755071 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.755498 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"
Feb 02 00:10:25 crc kubenswrapper[5108]: E0202 00:10:25.755791 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 02 00:10:25 crc kubenswrapper[5108]: E0202 00:10:25.760803 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:25.755750855 +0000 UTC m=+25.031247785,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:26 crc kubenswrapper[5108]: I0202 00:10:26.451560 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:27 crc kubenswrapper[5108]: I0202 00:10:27.452012 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.449695 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.698472 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700600 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700624 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700682 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:28 crc kubenswrapper[5108]: E0202 00:10:28.715514 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 02 00:10:29 crc kubenswrapper[5108]: I0202 00:10:29.451376 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:30 crc kubenswrapper[5108]: E0202 00:10:30.362058 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 02 00:10:30 crc kubenswrapper[5108]: I0202 00:10:30.452596 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:31 crc kubenswrapper[5108]: E0202 00:10:31.116351 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:10:31 crc kubenswrapper[5108]: I0202 00:10:31.453373 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:31 crc kubenswrapper[5108]: E0202 00:10:31.610461 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 00:10:32 crc kubenswrapper[5108]: I0202 00:10:32.452112 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:32 crc kubenswrapper[5108]: E0202 00:10:32.874680 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.447168 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.720119 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.721275 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.721804 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.723442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.723521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.723547 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.724488 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.725140 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"
Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.725681 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.732175 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:33.725595416 +0000 UTC m=+33.001092386,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:34 crc kubenswrapper[5108]: I0202 00:10:34.453478 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:34 crc kubenswrapper[5108]: E0202 00:10:34.946135 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.453217 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.715956 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718076 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718098 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718141 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:35 crc kubenswrapper[5108]: E0202 00:10:35.734924 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 02 00:10:36 crc kubenswrapper[5108]: I0202 00:10:36.450358 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:37 crc kubenswrapper[5108]: I0202 00:10:37.453357 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:38 crc kubenswrapper[5108]: E0202 00:10:38.126832 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:10:38 crc kubenswrapper[5108]: I0202 00:10:38.452737 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:39 crc kubenswrapper[5108]: I0202 00:10:39.453966 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:40 crc kubenswrapper[5108]: I0202 00:10:40.452621 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:41 crc kubenswrapper[5108]: I0202 00:10:41.452872 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:41 crc kubenswrapper[5108]: E0202 00:10:41.611103 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.452501 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.735596 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.736910 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.736983 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.737004 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.737047 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:42 crc kubenswrapper[5108]: E0202 00:10:42.754028 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 02 00:10:43 crc kubenswrapper[5108]: I0202 00:10:43.453511 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:44 crc kubenswrapper[5108]: I0202 00:10:44.452355 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:45 crc kubenswrapper[5108]: E0202 00:10:45.131971 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:10:45 crc kubenswrapper[5108]: I0202 00:10:45.453691 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:46 crc kubenswrapper[5108]: E0202 00:10:46.195947 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 02 00:10:46 crc kubenswrapper[5108]: I0202 00:10:46.452666 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.421649 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.453094 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.557327 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.558626 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.558689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.558710 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.559414 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.559839 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"
Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.570678 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18904570f691e318\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f691e318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,LastTimestamp:2026-02-02 00:10:47.562973506 +0000 UTC m=+46.838470436,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.761811 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.812457 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.814209 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1890457102b460df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457102b460df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,LastTimestamp:2026-02-02 00:10:47.80757122 +0000 UTC m=+47.083068150,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.833866 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045710387965a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045710387965a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,LastTimestamp:2026-02-02 00:10:47.822506536 +0000 UTC m=+47.098003466,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.450108 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.822879 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.824099 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826483 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" exitCode=255
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b"}
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826562 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826957 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.827858 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.827930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.827957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:48 crc kubenswrapper[5108]: E0202 00:10:48.828532 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.828907 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b"
Feb 02 00:10:48 crc kubenswrapper[5108]: E0202 00:10:48.829336 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 02 00:10:48 crc kubenswrapper[5108]: E0202 00:10:48.842973 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:48.829266091 +0000 UTC m=+48.104763061,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.450962 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.755080 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756580 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756644 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756692 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756738 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:49 crc kubenswrapper[5108]: E0202 00:10:49.776980 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.834551 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 02 00:10:50 crc kubenswrapper[5108]: I0202 00:10:50.451555 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:51 crc kubenswrapper[5108]: I0202 00:10:51.452325 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:51 crc kubenswrapper[5108]: E0202 00:10:51.612143 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 00:10:52 crc kubenswrapper[5108]: E0202 00:10:52.142859 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:10:52 crc kubenswrapper[5108]: I0202 00:10:52.452459 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.452608 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.721286 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.721548 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.722880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.723026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.723054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:53 crc kubenswrapper[5108]: E0202 00:10:53.723811 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.724489 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b"
Feb 02 00:10:53 crc kubenswrapper[5108]: E0202 00:10:53.724992 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 02 00:10:53 crc kubenswrapper[5108]: E0202 00:10:53.733444 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:53.724929747 +0000 UTC m=+53.000426717,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:54 crc kubenswrapper[5108]: I0202 00:10:54.453807 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.449514 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.752304 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.752794 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.754107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.754172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.754193 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:55 crc kubenswrapper[5108]: E0202 00:10:55.754893 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.755422 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b"
Feb 02 00:10:55 crc kubenswrapper[5108]: E0202 00:10:55.755820 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 02 00:10:55 crc kubenswrapper[5108]: E0202 00:10:55.763721 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:55.755756658 +0000 UTC m=+55.031253628,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.452378 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.777805 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.779968 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.780170 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.780260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.780343 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:56 crc kubenswrapper[5108]: E0202 00:10:56.793050 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 02 00:10:57 crc kubenswrapper[5108]: E0202 00:10:57.274663 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.452535 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.664410 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.664718 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.666777 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.666837 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.666856 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:57 crc kubenswrapper[5108]: E0202 00:10:57.667382 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:58 crc kubenswrapper[5108]: I0202 00:10:58.453776 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:10:59 crc kubenswrapper[5108]: E0202 00:10:59.152943 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:10:59 crc kubenswrapper[5108]: I0202 00:10:59.453965 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:00 crc kubenswrapper[5108]: I0202 00:11:00.452483 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:01 crc kubenswrapper[5108]: I0202 00:11:01.452020 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:01 crc kubenswrapper[5108]: E0202 00:11:01.613400 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 00:11:02 crc kubenswrapper[5108]: I0202 00:11:02.452307 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.455013 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.793992 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795772 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795832 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:11:03 crc kubenswrapper[5108]: E0202 00:11:03.809826 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 02 00:11:04 crc kubenswrapper[5108]: I0202 00:11:04.454348 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:05 crc kubenswrapper[5108]: I0202 00:11:05.453438 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:06 crc kubenswrapper[5108]: E0202 00:11:06.162218 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 02 00:11:06 crc kubenswrapper[5108]: I0202 00:11:06.452790 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.452386 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.635004 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-nqwjk"
Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.644311 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-nqwjk"
Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.682601 5108 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 02 00:11:08 crc kubenswrapper[5108]: I0202 00:11:08.273347 5108 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 02 00:11:08 crc kubenswrapper[5108]: I0202 00:11:08.646466 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-04 00:06:07 +0000 UTC" deadline="2026-02-27 01:31:16.615926221 +0000 UTC"
Feb 02 00:11:08 crc kubenswrapper[5108]: I0202 00:11:08.646613 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="601h20m7.96932356s"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.810472 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812017 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812106 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812364 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.823655 5108 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.824045 5108 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.824073 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827509 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827561 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827573 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827607 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.840155 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852187 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852210 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852247 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915055 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915155 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915190 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915190 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.931792 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939719 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939734 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939756 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939771 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.952995 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.953170 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.953218 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.053926 5108 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.154367 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.255557 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.355975 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.457053 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.557294 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.557803 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.558353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.558434 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.558459 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.559295 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.559756 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.614575 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.658630 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.759337 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.859591 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.960587 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.061518 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.162658 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.263476 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.304939 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.306775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0"} Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307099 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307779 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307839 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.308294 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.364071 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.464803 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.566035 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.667020 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.768101 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.868311 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.969316 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.070072 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.170887 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.272072 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.373113 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.473684 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.574547 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 
00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.675328 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.776516 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.877097 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.977552 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.078746 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.179854 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.280424 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.315118 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.316338 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.318608 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" exitCode=255 Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.318722 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0"} Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.318824 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.319127 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.319930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.319991 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.320011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.320723 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.321171 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.321534 5108 
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.381861 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... 8 more identical "Error getting the current node from lister" entries elided ...]
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.288606 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.322990 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.389162 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.490062 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.590724 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.690907 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.752318 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.752661 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.753753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.753809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.753829 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.755850 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.756523 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.756995 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
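The "SyncLoop (probe)" entry above records a liveness probe result for the pod; for an HTTP probe the kubelet issues a GET against the configured endpoint and treats any response status in [200,400) as healthy, so a failing container surfaces as status="unhealthy" and feeds the restart/backoff logic seen above. A minimal sketch of a container-side handler such a probe could target; the path and port are illustrative, not taken from this pod's actual probe spec:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // A liveness endpoint: the kubelet's HTTP probe counts any status
        // in [200,400) as a passing check.
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            fmt.Fprintln(w, "ok")
        })
        _ = http.ListenAndServe(":8080", nil) // port is illustrative
    }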
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.791954 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[... 26 more identical "Error getting the current node from lister" entries, logged roughly every 100ms, elided ...]
Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.512016 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.557061 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.558220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.558306 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.558328 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.558879 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.612347 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.712528 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
"Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.812829 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.913297 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.013561 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.113957 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.215017 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.315666 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.416250 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.516433 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.616725 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.717905 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.818699 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.918946 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: I0202 00:11:19.922706 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:11:19 crc kubenswrapper[5108]: I0202 00:11:19.971855 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 02 00:11:19 crc kubenswrapper[5108]: I0202 00:11:19.988537 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022116 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022163 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022181 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.091260 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.124912 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.124995 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.125022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.125059 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.125083 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.190124 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228558 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228611 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228624 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228642 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228653 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.290171 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.331546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.331817 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.331843 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.332417 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.332446 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446727 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446745 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446783 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.463508 5108 apiserver.go:52] "Watching apiserver" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.470434 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.471013 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-d74m7","openshift-multus/multus-q22wv","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-66k84","openshift-image-registry/node-ca-r6t6x","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-xdw92","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-additional-cni-plugins-gbldp","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/network-metrics-daemon-26ppl","openshift-network-diagnostics/network-check-target-fhkjl"] Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.473347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.473677 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.474032 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.474787 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.475275 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.477702 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.478715 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.478866 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.478897 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.480310 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.480615 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.480892 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.483179 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.483573 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.483602 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.484179 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.484195 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.484963 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.490118 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.492903 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.493931 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.494509 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.495113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.496098 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.498508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.499038 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.499676 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.499863 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.500196 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.500321 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.500613 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.503276 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.506984 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.509736 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.511633 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.511794 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.513138 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.518693 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.519697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.522006 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.522913 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.523196 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.527940 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.530828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.531061 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.530924 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.533930 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.535922 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.536314 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.536772 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.539043 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541360 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541376 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541546 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541560 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.542437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.542608 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.543057 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.543388 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.543726 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.543755 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.548531 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.548826 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.549000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.549102 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.549184 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.555197 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571050 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571407 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571476 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571503 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571531 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.574473 5108 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.574602 5108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.581385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584802 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584839 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584855 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584941 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.084916243 +0000 UTC m=+80.360413183 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.585524 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.594586 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.597478 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
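
Note: each "Failed to update status for pod" entry above embeds the JSON merge patch the kubelet tried to apply, quoted twice: once by klog's err=%q rendering of the error, and once more inside the "failed to patch status" message itself. The following is a minimal Go sketch that recovers and pretty-prints the patch from a single saved entry; the entry.txt input name and the field slicing are assumptions based on the layout above, not kubelet tooling.

    // unpatch.go: extract and pretty-print the status patch embedded in one
    // "Failed to update status for pod" log entry saved to entry.txt.
    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    func main() {
    	raw, err := os.ReadFile("entry.txt") // hypothetical: one entry per file
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	line := strings.TrimSpace(string(raw))

    	// First unquote: the err="..." field is a Go-quoted string that runs
    	// to the end of the entry.
    	i := strings.Index(line, `err="`)
    	if i < 0 {
    		fmt.Fprintln(os.Stderr, "no err= field in entry")
    		os.Exit(1)
    	}
    	errMsg, err := strconv.Unquote(line[i+len("err="):])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "unquote err=:", err)
    		os.Exit(1)
    	}

    	// Second unquote: the patch sits between `failed to patch status `
    	// and ` for pod ` as another quoted literal.
    	a := strings.Index(errMsg, `"`)
    	b := strings.Index(errMsg, ` for pod `)
    	if a < 0 || b < a {
    		fmt.Fprintln(os.Stderr, "unexpected error layout")
    		os.Exit(1)
    	}
    	patch, err := strconv.Unquote(errMsg[a:b])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "unquote patch:", err)
    		os.Exit(1)
    	}

    	var out bytes.Buffer
    	if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(out.String())
    }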
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.609941 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.629700 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653094 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653518 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653629 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653720 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
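
Note: the root condition is in the NodeNotReady entry above. The container runtime reports NetworkReady=false because no CNI config exists yet in /etc/kubernetes/cni/net.d/; on this cluster that file is typically written by the OVN-Kubernetes node pod once it is up, and that pod's own status cannot even be patched, per the entries above. A small Go sketch that approximates the readiness check; the directory comes from the log message, while the extension filter is an assumption mirroring common CNI config loaders.

    // cnicheck.go: report whether any CNI configuration file is present.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	confDir := "/etc/kubernetes/cni/net.d" // path from the log message
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		fmt.Println("cannot read conf dir:", err)
    		return
    	}
    	var found []string
    	for _, e := range entries {
    		// Assumed filter: the usual CNI config extensions.
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			found = append(found, e.Name())
    		}
    	}
    	if len(found) == 0 {
    		fmt.Println("no CNI configuration file -> NetworkReady=false")
    		return
    	}
    	fmt.Println("CNI configs present:", found)
    }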
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.655537 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.665543 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672441 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672480 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672512 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672544 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672572 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672595 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
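
Note: from 00:11:20.672441 onward the volume reconciler begins unmounting every volume that belonged to the deleted pods; each UniqueName encodes the volume plugin and the pod-UID-prefixed volume name. A throwaway Go scraper that tallies the entries that follow by plugin type; kubelet.log is a hypothetical local copy of this log, and the regexp is tied to the exact reconciler_common.go:162 layout above.

    // volsweep.go: count UnmountVolume entries per volume plugin.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Matches lines like:
    	//   ... "operationExecutor.UnmountVolume started for volume \"service-ca\"
    	//   (UniqueName: \"kubernetes.io/configmap/<pod-uid>-service-ca\") ...
    	re := regexp.MustCompile(`UnmountVolume started for volume \\"([^\\"]+)\\" \(UniqueName: \\"kubernetes\.io/([a-z-]+)/`)

    	f, err := os.Open("kubelet.log") // hypothetical local copy of this log
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	byPlugin := map[string]int{}
    	sc := bufio.NewScanner(f)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be very long
    	for sc.Scan() {
    		if m := re.FindStringSubmatch(sc.Text()); m != nil {
    			byPlugin[m[2]]++ // m[2] is the plugin segment, e.g. "configmap"
    		}
    	}
    	for plugin, n := range byPlugin {
    		fmt.Printf("%-12s %d volumes\n", plugin, n)
    	}
    }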
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672644 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672668 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672693 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672719 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672769 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672793 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672843 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672869 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672894 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672918 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672941 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672990 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673035 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673081 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673124 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: 
\"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673147 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673182 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673204 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673248 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673303 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673326 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673349 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673376 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673446 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673499 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673620 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673671 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673701 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673725 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673749 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673773 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673842 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673865 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673888 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673912 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673968 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673992 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674020 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674043 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674084 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674111 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674258 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674310 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674336 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674385 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674409 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674472 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674495 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674518 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674544 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674569 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674593 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674618 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674650 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" 
(UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674703 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674728 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674753 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674775 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674800 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674824 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674849 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674897 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674947 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674970 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674995 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675024 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675051 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675127 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675151 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675199 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675240 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675292 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675317 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675343 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675417 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675453 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675490 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675527 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675564 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676060 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676082 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676076 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676346 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676536 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676777 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676996 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677133 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677648 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677948 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678288 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678613 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678677 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678811 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678924 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679182 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.679209 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:11:21.179164679 +0000 UTC m=+80.454661649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679384 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679691 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679819 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680135 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680249 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680731 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.681131 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.681406 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.681807 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682160 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682202 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682413 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682476 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683125 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683456 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683622 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683672 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683711 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683745 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683773 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683797 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683824 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683878 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683901 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683943 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683964 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683986 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684004 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684036 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: 
\"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684055 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684638 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684766 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684952 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685174 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685421 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685430 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685955 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686041 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686050 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686074 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686339 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686738 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686878 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686966 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687010 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687113 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687178 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687161 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687373 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). 
InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687935 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688281 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688345 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688422 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688392 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688524 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688580 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688631 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688739 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688985 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689030 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689079 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689121 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689159 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689201 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689339 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689421 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689459 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689548 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689588 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689599 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689629 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689683 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689723 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689767 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689764 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689810 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689850 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689902 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689953 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690002 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690106 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690153 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690203 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690313 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod 
\"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690369 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690418 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692069 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692112 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692141 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692163 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692185 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692244 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692265 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692288 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692342 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692389 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692410 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692432 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694257 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694309 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694343 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694376 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694415 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694452 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695726 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695760 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695789 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696040 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696062 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696110 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696137 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696162 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696200 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696248 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696294 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696312 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696333 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696382 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696405 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696570 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696593 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696612 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696633 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696655 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696678 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696750 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696774 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696794 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696814 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696859 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696878 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696899 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696918 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc 
kubenswrapper[5108]: I0202 00:11:20.697133 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697157 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8fbr\" (UniqueName: \"kubernetes.io/projected/ddd95e62-4b23-4887-b6e7-364a01924524-kube-api-access-d8fbr\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697176 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698223 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26ft\" (UniqueName: \"kubernetes.io/projected/93334c92-cf5f-4978-b891-2b8e5ea35025-kube-api-access-w26ft\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-multus\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698373 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698405 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698557 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698629 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93334c92-cf5f-4978-b891-2b8e5ea35025-proxy-tls\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-os-release\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701593 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-conf-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701928 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-bin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701971 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702010 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702054 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-netns\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cnibin\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702130 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702172 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702223 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-hostroot\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689967 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702311 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5434f05-9acb-4d0c-a175-d5efc97194da-hosts-file\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702356 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ddd95e62-4b23-4887-b6e7-364a01924524-host\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702377 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689979 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690067 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689967 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690087 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690079 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691166 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691280 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691486 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691827 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691881 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691900 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692661 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692807 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692898 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.693185 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.693213 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694284 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694937 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695192 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695156 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695293 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695401 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695453 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695726 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696323 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696714 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696908 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697146 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697371 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697389 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697393 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697406 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697595 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697668 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697915 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698029 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698394 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698761 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698748 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698785 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698940 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699018 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699067 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699274 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699325 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). 
InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699351 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699565 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702757 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699703 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699599 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702397 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703508 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93334c92-cf5f-4978-b891-2b8e5ea35025-mcd-auth-proxy-config\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703645 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-socket-dir-parent\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703682 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703709 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 
crc kubenswrapper[5108]: I0202 00:11:20.703738 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703764 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699842 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699883 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.700379 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.700405 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.700772 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). 
InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701065 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701566 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701891 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701998 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701906 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702197 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702683 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702700 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698841 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703037 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704379 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704680 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705068 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705102 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705461 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705563 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705608 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cnibin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705636 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5434f05-9acb-4d0c-a175-d5efc97194da-tmp-dir\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705653 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706270 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706371 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706480 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-kubelet\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706536 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-os-release\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706578 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706658 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/93334c92-cf5f-4978-b891-2b8e5ea35025-rootfs\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706709 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-k8s-cni-cncf-io\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706750 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706993 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707203 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kbg\" (UniqueName: \"kubernetes.io/projected/f5434f05-9acb-4d0c-a175-d5efc97194da-kube-api-access-g2kbg\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707219 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.707362 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.707493 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.207465888 +0000 UTC m=+80.482962818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707702 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cni-binary-copy\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707736 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ddd95e62-4b23-4887-b6e7-364a01924524-serviceca\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-etc-kubernetes\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707841 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfg4q\" (UniqueName: \"kubernetes.io/projected/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-kube-api-access-vfg4q\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707871 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc 
kubenswrapper[5108]: I0202 00:11:20.708028 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708090 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-system-cni-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708127 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtcp\" (UniqueName: \"kubernetes.io/projected/f77c18f0-131e-482e-8e09-602b39b0c163-kube-api-access-mxtcp\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708308 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.708385 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.708493 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.208464924 +0000 UTC m=+80.483961874 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708569 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-system-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708629 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9m5\" (UniqueName: \"kubernetes.io/projected/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-kube-api-access-ft9m5\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708711 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708729 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708746 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708777 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708920 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708946 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-daemon-config\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708968 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-multus-certs\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708989 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709211 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709250 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709265 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709385 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709448 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709503 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709528 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709577 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709595 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709609 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709693 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709709 5108 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709724 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709886 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709901 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709914 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.710322 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.710369 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.710384 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711049 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711071 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711133 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711148 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711163 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711178 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711196 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711210 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711261 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711280 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711299 5108 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711317 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711330 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711346 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711359 5108 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711373 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711386 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711399 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711412 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711427 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711440 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711455 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711470 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711484 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath 
\"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711500 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711516 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711529 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711546 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711559 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711574 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711562 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711587 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711699 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711722 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711744 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711765 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711785 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711805 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711824 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711845 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711866 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711886 5108 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711908 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711927 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711946 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711967 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711985 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712003 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712022 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712040 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712057 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712077 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712096 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712114 5108 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712135 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712155 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712174 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712219 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712269 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712338 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712359 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712380 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712399 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712421 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712442 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712459 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712478 5108 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712499 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712519 5108 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712544 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712572 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712597 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712676 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712698 5108 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712717 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712722 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712739 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712816 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712843 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712864 5108 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712884 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712910 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712932 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712960 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712982 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713003 5108 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713025 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713047 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713067 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713086 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713142 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713165 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713306 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713366 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713390 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713410 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713432 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713454 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713475 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713494 5108 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713516 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713536 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713555 5108 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713576 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713596 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713614 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713654 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713672 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713695 5108 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713714 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713734 5108 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713753 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713771 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713790 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713808 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713827 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713846 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713864 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713881 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713927 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713945 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713962 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713981 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.714038 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.714369 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.715218 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.716617 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.718064 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.718082 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.719903 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.720210 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.720793 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.721021 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.720995 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.721171 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.721438 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722257 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722303 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722682 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722949 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.726317 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727147 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727908 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727934 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727985 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728021 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728266 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728381 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728471 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728645 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.228614718 +0000 UTC m=+80.504111768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728863 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728862 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728919 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728932 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.734883 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735089 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735377 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735386 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735544 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735700 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736044 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736084 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736192 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736416 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736435 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736682 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.737385 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.737629 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.737843 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738112 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738361 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738376 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738462 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738485 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738587 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738802 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738866 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738936 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739261 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739453 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739479 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739764 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740050 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740087 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740265 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740405 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740563 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740707 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740808 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.741014 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.741329 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.742552 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.742871 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.743104 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.743118 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.743288 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.744534 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.744576 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.744715 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.747187 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.747259 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.751548 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.752409 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760101 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760178 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760196 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760262 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760601 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc 
kubenswrapper[5108]: I0202 00:11:20.774104 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.775130 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.785074 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.786124 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.794064 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.799336 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.800401 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.813405 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815170 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-system-cni-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815247 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxtcp\" (UniqueName: \"kubernetes.io/projected/f77c18f0-131e-482e-8e09-602b39b0c163-kube-api-access-mxtcp\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815270 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815285 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-system-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815317 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9m5\" (UniqueName: \"kubernetes.io/projected/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-kube-api-access-ft9m5\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815341 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"ovnkube-node-66k84\" (UID: 
\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815360 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815413 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815431 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-daemon-config\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-multus-certs\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815464 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815482 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815501 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8fbr\" (UniqueName: \"kubernetes.io/projected/ddd95e62-4b23-4887-b6e7-364a01924524-kube-api-access-d8fbr\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " 
pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815519 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w26ft\" (UniqueName: \"kubernetes.io/projected/93334c92-cf5f-4978-b891-2b8e5ea35025-kube-api-access-w26ft\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815577 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-multus\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815593 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815611 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815629 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93334c92-cf5f-4978-b891-2b8e5ea35025-proxy-tls\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815651 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-os-release\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815667 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-conf-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-bin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815702 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815736 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-netns\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815752 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cnibin\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815785 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815801 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-hostroot\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815819 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5434f05-9acb-4d0c-a175-d5efc97194da-hosts-file\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815834 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ddd95e62-4b23-4887-b6e7-364a01924524-host\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93334c92-cf5f-4978-b891-2b8e5ea35025-mcd-auth-proxy-config\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815801 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"
memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",
\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816014 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816038 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816064 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-system-cni-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815902 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816241 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-socket-dir-parent\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816321 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816356 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816391 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816438 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cnibin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816527 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-netns\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cnibin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-multus\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816833 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ddd95e62-4b23-4887-b6e7-364a01924524-host\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816847 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-os-release\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816849 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-socket-dir-parent\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-conf-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816917 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-bin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5434f05-9acb-4d0c-a175-d5efc97194da-hosts-file\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817015 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817059 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817109 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817307 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 
02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817386 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817478 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-system-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817814 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-multus-certs\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817914 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817957 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cnibin\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818031 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818060 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-hostroot\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818067 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818156 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.818496 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: source /etc/kubernetes/apiserver-url.env Feb 02 00:11:20 crc kubenswrapper[5108]: else Feb 02 00:11:20 crc kubenswrapper[5108]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 02 00:11:20 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 02 00:11:20 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818536 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93334c92-cf5f-4978-b891-2b8e5ea35025-mcd-auth-proxy-config\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818567 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818649 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5434f05-9acb-4d0c-a175-d5efc97194da-tmp-dir\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818693 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818799 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-kubelet\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818880 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-os-release\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818933 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/93334c92-cf5f-4978-b891-2b8e5ea35025-rootfs\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818952 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-k8s-cni-cncf-io\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818975 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kbg\" (UniqueName: \"kubernetes.io/projected/f5434f05-9acb-4d0c-a175-d5efc97194da-kube-api-access-g2kbg\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819052 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cni-binary-copy\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819069 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ddd95e62-4b23-4887-b6e7-364a01924524-serviceca\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819087 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819131 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-etc-kubernetes\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfg4q\" (UniqueName: \"kubernetes.io/projected/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-kube-api-access-vfg4q\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.819321 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819340 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.819375 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.319357871 +0000 UTC m=+80.594854801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819416 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820292 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820438 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820509 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5434f05-9acb-4d0c-a175-d5efc97194da-tmp-dir\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92"
Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.820578 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820668 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-etc-kubernetes\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820694 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-kubelet\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-os-release\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820795 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-k8s-cni-cncf-io\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821154 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821809 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/93334c92-cf5f-4978-b891-2b8e5ea35025-rootfs\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821876 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-daemon-config\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821940 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cni-binary-copy\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822025 5108 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822088 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822102 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822115 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822125 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" 
DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822139 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822152 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822170 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822180 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822190 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822199 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822209 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822218 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822246 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822256 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822268 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822280 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822290 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822627 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822299 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823377 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823390 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823401 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823411 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823420 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823430 5108 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823440 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823449 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823459 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823529 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823541 5108 reconciler_common.go:299] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823553 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823563 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824309 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824323 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824334 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824344 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824353 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824363 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824374 5108 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824385 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824395 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824404 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833209 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833351 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833370 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93334c92-cf5f-4978-b891-2b8e5ea35025-proxy-tls\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833548 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833967 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834073 5108 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834167 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834283 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834416 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834498 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834567 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834753 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834836 5108 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834957 5108 reconciler_common.go:299] "Volume 
detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835030 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835087 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835147 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833793 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835262 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834220 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834047 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835445 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835569 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835583 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835594 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835606 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835617 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835629 5108 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835640 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835652 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: 
\"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835663 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835673 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835684 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835698 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835710 5108 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835720 5108 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835734 5108 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835747 5108 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835757 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835768 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835778 5108 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835789 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835801 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835811 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.839355 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ddd95e62-4b23-4887-b6e7-364a01924524-serviceca\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.840004 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8fbr\" (UniqueName: \"kubernetes.io/projected/ddd95e62-4b23-4887-b6e7-364a01924524-kube-api-access-d8fbr\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.840451 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.840894 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:20 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 02 00:11:20 crc kubenswrapper[5108]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 02 00:11:20 crc kubenswrapper[5108]: ho_enable="--enable-hybrid-overlay" Feb 02 00:11:20 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 02 00:11:20 crc kubenswrapper[5108]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 02 00:11:20 crc kubenswrapper[5108]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --webhook-host=127.0.0.1 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --webhook-port=9743 \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ho_enable} \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:20 crc kubenswrapper[5108]: --disable-approver \ Feb 02 00:11:20 crc kubenswrapper[5108]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --wait-for-kubernetes-api=200s \ Feb 02 00:11:20 crc kubenswrapper[5108]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.842153 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kbg\" (UniqueName: \"kubernetes.io/projected/f5434f05-9acb-4d0c-a175-d5efc97194da-kube-api-access-g2kbg\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.843287 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.843393 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfg4q\" (UniqueName: \"kubernetes.io/projected/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-kube-api-access-vfg4q\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.846439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxtcp\" (UniqueName: \"kubernetes.io/projected/f77c18f0-131e-482e-8e09-602b39b0c163-kube-api-access-mxtcp\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.846541 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26ft\" (UniqueName: \"kubernetes.io/projected/93334c92-cf5f-4978-b891-2b8e5ea35025-kube-api-access-w26ft\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.847144 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.847559 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9m5\" (UniqueName: \"kubernetes.io/projected/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-kube-api-access-ft9m5\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.850742 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.853359 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd95e62_4b23_4887_b6e7_364a01924524.slice/crio-896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6 WatchSource:0}: Error finding container 896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6: Status 404 returned error can't find the container with id 896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6 Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.854980 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde WatchSource:0}: Error finding container 5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde: Status 404 returned error can't find the container with id 5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.856613 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 02 00:11:20 crc kubenswrapper[5108]: while [ true ]; Feb 02 00:11:20 crc kubenswrapper[5108]: do Feb 02 00:11:20 crc kubenswrapper[5108]: for f in $(ls /tmp/serviceca); do Feb 02 00:11:20 crc kubenswrapper[5108]: echo $f Feb 02 00:11:20 crc kubenswrapper[5108]: ca_file_path="/tmp/serviceca/${f}" Feb 02 00:11:20 crc kubenswrapper[5108]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 02 00:11:20 crc kubenswrapper[5108]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 02 00:11:20 crc kubenswrapper[5108]: if [ -e "${reg_dir_path}" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:20 crc kubenswrapper[5108]: else Feb 02 00:11:20 crc kubenswrapper[5108]: mkdir $reg_dir_path Feb 02 00:11:20 crc kubenswrapper[5108]: cp $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: for d in $(ls /etc/docker/certs.d); do Feb 02 00:11:20 crc kubenswrapper[5108]: echo $d Feb 02 00:11:20 crc kubenswrapper[5108]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 02 00:11:20 crc kubenswrapper[5108]: reg_conf_path="/tmp/serviceca/${dp}" Feb 02 00:11:20 crc kubenswrapper[5108]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: rm -rf /etc/docker/certs.d/$d Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait ${!} Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8fbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-r6t6x_openshift-image-registry(ddd95e62-4b23-4887-b6e7-364a01924524): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.859824 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:20 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --disable-webhook \ Feb 02 00:11:20 crc kubenswrapper[5108]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.859904 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-r6t6x" podUID="ddd95e62-4b23-4887-b6e7-364a01924524" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.859988 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.861145 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.861525 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.865493 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865534 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865614 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.866831 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.871135 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.873379 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.874406 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0298f7da_43a3_48a4_8e32_b772a82bd62d.slice/crio-b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058 WatchSource:0}: Error finding container b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058: Status 404 returned error can't find the container with id b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058 Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.878600 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:20 crc kubenswrapper[5108]: set -euo pipefail Feb 02 00:11:20 crc kubenswrapper[5108]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 02 00:11:20 crc kubenswrapper[5108]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 02 00:11:20 crc kubenswrapper[5108]: # As the secret mount is optional we must wait for the files to be present. Feb 02 00:11:20 crc kubenswrapper[5108]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 02 00:11:20 crc kubenswrapper[5108]: TS=$(date +%s) Feb 02 00:11:20 crc kubenswrapper[5108]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 02 00:11:20 crc kubenswrapper[5108]: HAS_LOGGED_INFO=0 Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: log_missing_certs(){ Feb 02 00:11:20 crc kubenswrapper[5108]: CUR_TS=$(date +%s) Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 02 00:11:20 crc kubenswrapper[5108]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 02 00:11:20 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 02 00:11:20 crc kubenswrapper[5108]: HAS_LOGGED_INFO=1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: } Feb 02 00:11:20 crc kubenswrapper[5108]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 02 00:11:20 crc kubenswrapper[5108]: log_missing_certs Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 5 Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/kube-rbac-proxy \ Feb 02 00:11:20 crc kubenswrapper[5108]: --logtostderr \ Feb 02 00:11:20 crc kubenswrapper[5108]: --secure-listen-address=:9108 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --upstream=http://127.0.0.1:29108/ \ Feb 02 00:11:20 crc kubenswrapper[5108]: --tls-private-key-file=${TLS_PK} \ Feb 02 00:11:20 crc kubenswrapper[5108]: --tls-cert-file=${TLS_CERT} Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.882883 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:20 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 
00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # This is needed so that converting clusters from GA to TP Feb 02 00:11:20 crc kubenswrapper[5108]: # will rollout control plane pods as well Feb 02 00:11:20 crc kubenswrapper[5108]: network_segmentation_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "true" != "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: route_advertisements_enable_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Enable multi-network policy if configured (control-plane always full mode) Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_policy_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Enable admin network policy if configured (control-plane always full mode) Feb 02 00:11:20 crc kubenswrapper[5108]: admin_network_policy_enabled_flag= Feb 02 00:11:20 crc 
kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: if [ "shared" == "shared" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode shared" Feb 02 00:11:20 crc kubenswrapper[5108]: elif [ "shared" == "local" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode local" Feb 02 00:11:20 crc kubenswrapper[5108]: else Feb 02 00:11:20 crc kubenswrapper[5108]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 02 00:11:20 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/ovnkube \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:20 crc kubenswrapper[5108]: --init-cluster-manager "${K8S_NODE}" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 02 00:11:20 crc kubenswrapper[5108]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --metrics-bind-address "127.0.0.1:29108" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --metrics-enable-pprof \ Feb 02 00:11:20 crc kubenswrapper[5108]: --metrics-enable-config-duration \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v4_join_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v6_join_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${dns_name_resolver_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${persistent_ips_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${multi_network_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${network_segmentation_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${gateway_mode_flags} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${route_advertisements_enable_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${preconfigured_udn_addresses_enable_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-ip=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-firewall=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-qos=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-service=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-multicast \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-multi-external-gateway=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${multi_network_policy_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${admin_network_policy_enabled_flag} Feb 02 00:11:20 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.883968 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.883986 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f8cedc_9b82_4ef7_a7db_4ce57803e0ce.slice/crio-61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4 WatchSource:0}: Error finding container 61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4: Status 404 returned error can't find the container with id 61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4 Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.886112 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.886609 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 02 00:11:20 crc kubenswrapper[5108]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 02 00:11:20 crc kubenswrapper[5108]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfg4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-q22wv_openshift-multus(24f8cedc-9b82-4ef7-a7db-4ce57803e0ce): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.888479 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-q22wv" podUID="24f8cedc-9b82-4ef7-a7db-4ce57803e0ce" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.890128 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0
768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.900351 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:20 crc kubenswrapper[5108]: set -uo pipefail Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 02 00:11:20 crc kubenswrapper[5108]: HOSTS_FILE="/etc/hosts" Feb 02 00:11:20 crc kubenswrapper[5108]: TEMP_FILE="/tmp/hosts.tmp" Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Make a temporary file with the old hosts file's attributes. Feb 02 00:11:20 crc kubenswrapper[5108]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 02 00:11:20 crc kubenswrapper[5108]: echo "Failed to preserve hosts file. Exiting." Feb 02 00:11:20 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: while true; do Feb 02 00:11:20 crc kubenswrapper[5108]: declare -A svc_ips Feb 02 00:11:20 crc kubenswrapper[5108]: for svc in "${services[@]}"; do Feb 02 00:11:20 crc kubenswrapper[5108]: # Fetch service IP from cluster dns if present. We make several tries Feb 02 00:11:20 crc kubenswrapper[5108]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 02 00:11:20 crc kubenswrapper[5108]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 02 00:11:20 crc kubenswrapper[5108]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 02 00:11:20 crc kubenswrapper[5108]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:20 crc kubenswrapper[5108]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:20 crc kubenswrapper[5108]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:20 crc kubenswrapper[5108]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 02 00:11:20 crc kubenswrapper[5108]: for i in ${!cmds[*]} Feb 02 00:11:20 crc kubenswrapper[5108]: do Feb 02 00:11:20 crc kubenswrapper[5108]: ips=($(eval "${cmds[i]}")) Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: svc_ips["${svc}"]="${ips[@]}" Feb 02 00:11:20 crc kubenswrapper[5108]: break Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Update /etc/hosts only if we get valid service IPs Feb 02 00:11:20 crc kubenswrapper[5108]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 02 00:11:20 crc kubenswrapper[5108]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 02 00:11:20 crc kubenswrapper[5108]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 02 00:11:20 crc kubenswrapper[5108]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:20 crc kubenswrapper[5108]: continue Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Append resolver entries for services Feb 02 00:11:20 crc kubenswrapper[5108]: rc=0 Feb 02 00:11:20 crc kubenswrapper[5108]: for svc in "${!svc_ips[@]}"; do Feb 02 00:11:20 crc kubenswrapper[5108]: for ip in ${svc_ips[${svc}]}; do Feb 02 00:11:20 crc kubenswrapper[5108]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ $rc -ne 0 ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:20 crc kubenswrapper[5108]: continue Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 02 00:11:20 crc kubenswrapper[5108]: # Replace /etc/hosts with our modified version if needed Feb 02 00:11:20 crc kubenswrapper[5108]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 02 00:11:20 crc kubenswrapper[5108]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:20 crc kubenswrapper[5108]: unset svc_ips Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2kbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xdw92_openshift-dns(f5434f05-9acb-4d0c-a175-d5efc97194da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.901430 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xdw92" podUID="f5434f05-9acb-4d0c-a175-d5efc97194da" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.903304 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.911318 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.914086 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.918273 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.924469 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131f7f53_e6cd_4e60_87d5_5a67b6f40b76.slice/crio-0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e WatchSource:0}: Error finding container 0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e: Status 404 returned error can't find the container with id 0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.927250 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-gbldp_openshift-multus(131f7f53-e6cd-4e60-87d5-5a67b6f40b76): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.928616 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbldp" podUID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.930952 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93334c92_cf5f_4978_b891_2b8e5ea35025.slice/crio-1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e WatchSource:0}: Error finding container 1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e: Status 404 returned error can't find the container with id 1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.934847 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc 
kubenswrapper[5108]: E0202 00:11:20.938259 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.940536 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968395 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968515 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968552 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072399 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072477 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.140867 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141104 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141135 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141160 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141352 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.141223445 +0000 UTC m=+81.416720415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.147558 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.160914 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 02 00:11:21 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig
Feb 02 00:11:21 crc kubenswrapper[5108]: apiVersion: v1
Feb 02 00:11:21 crc kubenswrapper[5108]: clusters:
Feb 02 00:11:21 crc kubenswrapper[5108]: - cluster:
Feb 02 00:11:21 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Feb 02 00:11:21 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443
Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-cluster
Feb 02 00:11:21 crc kubenswrapper[5108]: contexts:
Feb 02 00:11:21 crc kubenswrapper[5108]: - context:
Feb 02 00:11:21 crc kubenswrapper[5108]: cluster: default-cluster
Feb 02 00:11:21 crc kubenswrapper[5108]: namespace: default
Feb 02 00:11:21 crc kubenswrapper[5108]: user: default-auth
Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-context
Feb 02 00:11:21 crc kubenswrapper[5108]: current-context: default-context
Feb 02 00:11:21 crc kubenswrapper[5108]: kind: Config
Feb 02 00:11:21 crc kubenswrapper[5108]: preferences: {}
Feb 02 00:11:21 crc kubenswrapper[5108]: users:
Feb 02 00:11:21 crc kubenswrapper[5108]: - name: default-auth
Feb 02 00:11:21 crc kubenswrapper[5108]: user:
Feb 02 00:11:21 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Feb 02 00:11:21 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Feb 02 00:11:21 crc kubenswrapper[5108]: EOF
Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-66k84_openshift-ovn-kubernetes(d0c5973e-49ea-41a0-87d5-c8e867ee8a66): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.162147 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174285 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174321 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174366 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242176 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242501 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.242455335 +0000 UTC m=+81.517952305 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242638 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242675 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242779 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.242741463 +0000 UTC m=+81.518238433 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242916 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242919 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.243071 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.24303809 +0000 UTC m=+81.518535060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242942 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.243211 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.243450 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.24340199 +0000 UTC m=+81.518898960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.276949 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277084 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277112 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293443 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293529 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293549 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293599 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.306330 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311093 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311147 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311176 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.325900 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332382 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332457 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332477 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332533 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.342288 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"9893258dab7a033d522aebee422e4d3ac3767f3fa09f53c77a4ed6caa75683e5"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.343590 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.343728 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.343822 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.343797859 +0000 UTC m=+81.619294789 (durationBeforeRetry 1s). 
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.343982 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerStarted","Data":"61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4"}
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.346426 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT=""
Feb 02 00:11:21 crc kubenswrapper[5108]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT
Feb 02 00:11:21 crc kubenswrapper[5108]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfg4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-q22wv_openshift-multus(24f8cedc-9b82-4ef7-a7db-4ce57803e0ce): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.346500 5108 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: source /etc/kubernetes/apiserver-url.env Feb 02 00:11:21 crc kubenswrapper[5108]: else Feb 02 00:11:21 crc kubenswrapper[5108]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 02 00:11:21 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:N
ETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.348120 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.348336 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-q22wv" podUID="24f8cedc-9b82-4ef7-a7db-4ce57803e0ce"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.348522 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e"}
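[editor's note] Every "CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" above is the same condition: the kubelet will not create a container until its Service informer has synced at least once, because it must inject Docker-link-style environment variables for every Service visible to the pod. A minimal sketch of the variable shapes involved, following the documented naming convention; this is an illustration, not the kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

// serviceEnv sketches the per-Service environment variables injected into
// containers, e.g. KUBERNETES_SERVICE_HOST seen in the multus spec above.
// Building these requires a synced view of Services, hence the error when
// the kubelet has not yet read them.
func serviceEnv(name, clusterIP string, port int) map[string]string {
	prefix := strings.ReplaceAll(strings.ToUpper(name), "-", "_")
	tcpAddr := fmt.Sprintf("tcp://%s:%d", clusterIP, port)
	return map[string]string{
		prefix + "_SERVICE_HOST": clusterIP,
		prefix + "_SERVICE_PORT": fmt.Sprintf("%d", port),
		prefix + "_PORT":         tcpAddr,
		fmt.Sprintf("%s_PORT_%d_TCP", prefix, port):       tcpAddr,
		fmt.Sprintf("%s_PORT_%d_TCP_PROTO", prefix, port): "tcp",
		fmt.Sprintf("%s_PORT_%d_TCP_PORT", prefix, port):  fmt.Sprintf("%d", port),
		fmt.Sprintf("%s_PORT_%d_TCP_ADDR", prefix, port):  clusterIP,
	}
}

func main() {
	// Cluster IP and port are illustrative values.
	for k, v := range serviceEnv("kubernetes", "10.217.4.1", 443) {
		fmt.Printf("%s=%s\n", k, v)
	}
}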
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.348521 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.351342 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.351320 5108 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.353442 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354044 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354189 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
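[editor's note] The NetworkReady=false condition repeated above clears once a CNI network configuration appears in the directory the container runtime watches. A minimal sketch of that readiness test, assuming the libcni convention of accepting .conf, .conflist, and .json files and using the directory named in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady reports whether at least one CNI network config exists in confDir.
// This mirrors the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/" as a sketch, not the runtime's exact code.
func cniReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniReady("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err)
}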
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.354212 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gbldp_openshift-multus(131f7f53-e6cd-4e60-87d5-5a67b6f40b76): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.354540 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.355404 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbldp" podUID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76"
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.355943 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdw92" event={"ID":"f5434f05-9acb-4d0c-a175-d5efc97194da","Type":"ContainerStarted","Data":"11177a9280a46b5ae3e32cd16fd55c985bd85a843c725f73bb7e0729cf24754b"}
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.362246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" 
event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerStarted","Data":"b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.363133 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:21 crc kubenswrapper[5108]: set -uo pipefail Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 02 00:11:21 crc kubenswrapper[5108]: HOSTS_FILE="/etc/hosts" Feb 02 00:11:21 crc kubenswrapper[5108]: TEMP_FILE="/tmp/hosts.tmp" Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Make a temporary file with the old hosts file's attributes. Feb 02 00:11:21 crc kubenswrapper[5108]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 02 00:11:21 crc kubenswrapper[5108]: echo "Failed to preserve hosts file. Exiting." Feb 02 00:11:21 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: while true; do Feb 02 00:11:21 crc kubenswrapper[5108]: declare -A svc_ips Feb 02 00:11:21 crc kubenswrapper[5108]: for svc in "${services[@]}"; do Feb 02 00:11:21 crc kubenswrapper[5108]: # Fetch service IP from cluster dns if present. We make several tries Feb 02 00:11:21 crc kubenswrapper[5108]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 02 00:11:21 crc kubenswrapper[5108]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 02 00:11:21 crc kubenswrapper[5108]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 02 00:11:21 crc kubenswrapper[5108]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:21 crc kubenswrapper[5108]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:21 crc kubenswrapper[5108]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:21 crc kubenswrapper[5108]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 02 00:11:21 crc kubenswrapper[5108]: for i in ${!cmds[*]} Feb 02 00:11:21 crc kubenswrapper[5108]: do Feb 02 00:11:21 crc kubenswrapper[5108]: ips=($(eval "${cmds[i]}")) Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: svc_ips["${svc}"]="${ips[@]}" Feb 02 00:11:21 crc kubenswrapper[5108]: break Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Update /etc/hosts only if we get valid service IPs Feb 02 00:11:21 crc kubenswrapper[5108]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 02 00:11:21 crc kubenswrapper[5108]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 02 00:11:21 crc kubenswrapper[5108]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 02 00:11:21 crc kubenswrapper[5108]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:21 crc kubenswrapper[5108]: continue Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Append resolver entries for services Feb 02 00:11:21 crc kubenswrapper[5108]: rc=0 Feb 02 00:11:21 crc kubenswrapper[5108]: for svc in "${!svc_ips[@]}"; do Feb 02 00:11:21 crc kubenswrapper[5108]: for ip in ${svc_ips[${svc}]}; do Feb 02 00:11:21 crc kubenswrapper[5108]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ $rc -ne 0 ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:21 crc kubenswrapper[5108]: continue Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 02 00:11:21 crc kubenswrapper[5108]: # Replace /etc/hosts with our modified version if needed Feb 02 00:11:21 crc kubenswrapper[5108]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 02 00:11:21 crc kubenswrapper[5108]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:21 crc kubenswrapper[5108]: unset svc_ips Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.364196 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"7a2461c6a473f94ba1ea1904c2b0cd4abbd44d50e56c3ab93bba762c867a78ab"}
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.364343 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xdw92" podUID="f5434f05-9acb-4d0c-a175-d5efc97194da"
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.365777 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Feb 02 00:11:21 crc kubenswrapper[5108]: set -euo pipefail
Feb 02 00:11:21 crc kubenswrapper[5108]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Feb 02 00:11:21 crc kubenswrapper[5108]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Feb 02 00:11:21 crc kubenswrapper[5108]: # As the secret mount is optional we must wait for the files to be present.
Feb 02 00:11:21 crc kubenswrapper[5108]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Feb 02 00:11:21 crc kubenswrapper[5108]: TS=$(date +%s)
Feb 02 00:11:21 crc kubenswrapper[5108]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Feb 02 00:11:21 crc kubenswrapper[5108]: HAS_LOGGED_INFO=0
Feb 02 00:11:21 crc kubenswrapper[5108]: 
Feb 02 00:11:21 crc kubenswrapper[5108]: log_missing_certs(){
Feb 02 00:11:21 crc kubenswrapper[5108]: CUR_TS=$(date +%s)
Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Feb 02 00:11:21 crc kubenswrapper[5108]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Feb 02 00:11:21 crc kubenswrapper[5108]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 02 00:11:21 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 02 00:11:21 crc kubenswrapper[5108]: HAS_LOGGED_INFO=1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: } Feb 02 00:11:21 crc kubenswrapper[5108]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 02 00:11:21 crc kubenswrapper[5108]: log_missing_certs Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 5 Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/kube-rbac-proxy \ Feb 02 00:11:21 crc kubenswrapper[5108]: --logtostderr \ Feb 02 00:11:21 crc kubenswrapper[5108]: --secure-listen-address=:9108 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --upstream=http://127.0.0.1:29108/ \ Feb 02 00:11:21 crc kubenswrapper[5108]: --tls-private-key-file=${TLS_PK} \ Feb 02 00:11:21 crc kubenswrapper[5108]: --tls-cert-file=${TLS_CERT} Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.365961 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.366048 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
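[editor's note] The kube-rbac-proxy wrapper above polls for an optionally mounted secret before exec'ing the proxy. Note the logged comparison `-gt "WARN_TS"`: because `-gt` puts both operands in an arithmetic context, bash still resolves the bare name to the variable's value, so the script works, but the explicit `"${WARN_TS}"` expansion is the idiomatic spelling. A condensed sketch of the same wait-then-exec pattern; paths, port numbers and the 20-minute threshold come from the logged script:

    #!/bin/bash
    # Wait for an optional secret mount, logging INFO once and WARN after a deadline.
    set -euo pipefail
    TLS_PK=/etc/pki/tls/metrics-cert/tls.key
    TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
    WARN_TS=$(( $(date +%s) + 20 * 60 ))     # warn if still missing after 20 minutes
    HAS_LOGGED_INFO=0

    while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]]; do
      if [[ "$(date +%s)" -gt "${WARN_TS}" ]]; then    # explicit ${...}, unlike the log
        echo "$(date -Iseconds) WARN: metrics cert not mounted after 20 minutes."
      elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]]; then
        echo "$(date -Iseconds) INFO: metrics cert not mounted. Waiting 20 minutes."
        HAS_LOGGED_INFO=1
      fi
      sleep 5
    done
    exec /usr/bin/kube-rbac-proxy --secure-listen-address=:9108 \
      --upstream=http://127.0.0.1:29108/ \
      --tls-private-key-file="${TLS_PK}" --tls-cert-file="${TLS_CERT}"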
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.368995 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 02 
00:11:21 crc kubenswrapper[5108]: apiVersion: v1 Feb 02 00:11:21 crc kubenswrapper[5108]: clusters: Feb 02 00:11:21 crc kubenswrapper[5108]: - cluster: Feb 02 00:11:21 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443 Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-cluster Feb 02 00:11:21 crc kubenswrapper[5108]: contexts: Feb 02 00:11:21 crc kubenswrapper[5108]: - context: Feb 02 00:11:21 crc kubenswrapper[5108]: cluster: default-cluster Feb 02 00:11:21 crc kubenswrapper[5108]: namespace: default Feb 02 00:11:21 crc kubenswrapper[5108]: user: default-auth Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-context Feb 02 00:11:21 crc kubenswrapper[5108]: current-context: default-context Feb 02 00:11:21 crc kubenswrapper[5108]: kind: Config Feb 02 00:11:21 crc kubenswrapper[5108]: preferences: {} Feb 02 00:11:21 crc kubenswrapper[5108]: users: Feb 02 00:11:21 crc kubenswrapper[5108]: - name: default-auth Feb 02 00:11:21 crc kubenswrapper[5108]: user: Feb 02 00:11:21 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:21 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:21 crc kubenswrapper[5108]: EOF Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-66k84_openshift-ovn-kubernetes(d0c5973e-49ea-41a0-87d5-c8e867ee8a66): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.369824 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:21 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc 
kubenswrapper[5108]: ovn_v6_join_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # This is needed so that converting clusters from GA to TP Feb 02 00:11:21 crc kubenswrapper[5108]: # will rollout control plane pods as well Feb 02 00:11:21 crc kubenswrapper[5108]: network_segmentation_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" != "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: route_advertisements_enable_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Enable multi-network policy if configured (control-plane always full mode) Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_policy_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 
crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Enable admin network policy if configured (control-plane always full mode) Feb 02 00:11:21 crc kubenswrapper[5108]: admin_network_policy_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: if [ "shared" == "shared" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode shared" Feb 02 00:11:21 crc kubenswrapper[5108]: elif [ "shared" == "local" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode local" Feb 02 00:11:21 crc kubenswrapper[5108]: else Feb 02 00:11:21 crc kubenswrapper[5108]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 02 00:11:21 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/ovnkube \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:21 crc kubenswrapper[5108]: --init-cluster-manager "${K8S_NODE}" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 02 00:11:21 crc kubenswrapper[5108]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --metrics-bind-address "127.0.0.1:29108" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --metrics-enable-pprof \ Feb 02 00:11:21 crc kubenswrapper[5108]: --metrics-enable-config-duration \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v4_join_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v6_join_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${dns_name_resolver_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${persistent_ips_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${multi_network_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${network_segmentation_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${gateway_mode_flags} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${route_advertisements_enable_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${preconfigured_udn_addresses_enable_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-ip=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-firewall=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-qos=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-service=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-multicast \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-multi-external-gateway=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${multi_network_policy_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${admin_network_policy_enabled_flag} Feb 02 00:11:21 crc kubenswrapper[5108]: 
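[editor's note] The ovnkube-cluster-manager command above is a rendered template: the network operator substitutes literal values into guards like `if [[ "" != "" ]]` at manifest-render time, each guard fills an optional flag variable, and the variables are expanded unquoted so that empty ones vanish from the final argv. The `set -o allexport; source /env/_master` prologue exports every assignment in an override file into the environment. A reduced sketch of both patterns; the flag name and binary path come from the logged script, while the `ENABLE_MULTI_NETWORK` guard is an assumed stand-in for the literal the operator inlines:

    #!/bin/bash
    set -xe
    # Export every assignment in an optional env-overrides file.
    if [[ -f "/env/_master" ]]; then
      set -o allexport
      source "/env/_master"
      set +o allexport
    fi

    # Optional flags stay empty unless enabled; unquoted expansion below
    # drops empty variables from the argv entirely.
    multi_network_enabled_flag=
    if [[ "${ENABLE_MULTI_NETWORK:-true}" == "true" ]]; then   # assumed env guard;
      multi_network_enabled_flag="--enable-multi-network"      # the real script inlines a literal
    fi

    # K8S_NODE and OVN_KUBE_LOG_LEVEL are injected via the pod spec in the log.
    exec /usr/bin/ovnkube \
      --init-cluster-manager "${K8S_NODE}" \
      --loglevel "${OVN_KUBE_LOG_LEVEL}" \
      ${multi_network_enabled_flag}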
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.370185 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.370619 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
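[editor's note] Every start failure in this burst carries the same CreateContainerConfigError, "services have not yet been read at least once, cannot construct envvars": the kubelet refuses to build a container's environment until its service informer has synced at least once after restart, so pods racing a fresh kubelet fail and are retried on the next sync. One illustrative way to spot pods stuck on this from a workstation (assumes the oc client; kubectl works the same, and the pod name is taken from the log):

    # Pods reporting CreateContainerConfigError in their status column.
    oc get pods -A | grep CreateContainerConfigError
    # Inspect the kubelet events recorded for one of the affected pods.
    oc describe pod -n openshift-dns node-resolver-xdw92 | grep -A5 Events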
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.370998 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.370947 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6t6x" event={"ID":"ddd95e62-4b23-4887-b6e7-364a01924524","Type":"ContainerStarted","Data":"896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.371831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"9997fcb85f88b9cc0029d5e0b7da92d29fdfbfbe05e37cdd43cb8ba96499fdc5"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.371908 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.373065 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 02 00:11:21 crc kubenswrapper[5108]: while [ true ]; Feb 02 00:11:21 crc kubenswrapper[5108]: do Feb 02 00:11:21 crc kubenswrapper[5108]: for f in 
$(ls /tmp/serviceca); do Feb 02 00:11:21 crc kubenswrapper[5108]: echo $f Feb 02 00:11:21 crc kubenswrapper[5108]: ca_file_path="/tmp/serviceca/${f}" Feb 02 00:11:21 crc kubenswrapper[5108]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 02 00:11:21 crc kubenswrapper[5108]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 02 00:11:21 crc kubenswrapper[5108]: if [ -e "${reg_dir_path}" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: else Feb 02 00:11:21 crc kubenswrapper[5108]: mkdir $reg_dir_path Feb 02 00:11:21 crc kubenswrapper[5108]: cp $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: for d in $(ls /etc/docker/certs.d); do Feb 02 00:11:21 crc kubenswrapper[5108]: echo $d Feb 02 00:11:21 crc kubenswrapper[5108]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 02 00:11:21 crc kubenswrapper[5108]: reg_conf_path="/tmp/serviceca/${dp}" Feb 02 00:11:21 crc kubenswrapper[5108]: if [ ! -e "${reg_conf_path}" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: rm -rf /etc/docker/certs.d/$d Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait ${!} Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8fbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-r6t6x_openshift-image-registry(ddd95e62-4b23-4887-b6e7-364a01924524): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.373985 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: source "/env/_master" 
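[editor's note] The node-ca loop above syncs the serviceca payload into /etc/docker/certs.d and prunes directories whose source key disappeared. The non-obvious piece is the filename convention: ConfigMap keys cannot contain ':', so '..' stands in for it, and the two sed expressions translate in each direction (the greedy `(.*)` pins the substitution to the last occurrence, which is where the registry port lives). The translations isolated as a sketch; the example key is illustrative:

    #!/bin/bash
    # Map a serviceca key to a certs.d directory name and back.
    key="image-registry.openshift-image-registry.svc..5000"   # example key
    dir=$(echo "$key" | sed -r 's/(.*)\.\./\1:/')             # -> ...svc:5000
    back=$(echo "$dir" | sed -r 's/(.*):/\1\.\./')            # -> ...svc..5000
    echo "$dir"
    echo "$back"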
Feb 02 00:11:21 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 02 00:11:21 crc kubenswrapper[5108]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 02 00:11:21 crc kubenswrapper[5108]: ho_enable="--enable-hybrid-overlay" Feb 02 00:11:21 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 02 00:11:21 crc kubenswrapper[5108]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 02 00:11:21 crc kubenswrapper[5108]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --webhook-host=127.0.0.1 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --webhook-port=9743 \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ho_enable} \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:21 crc kubenswrapper[5108]: --disable-approver \ Feb 02 00:11:21 crc kubenswrapper[5108]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --wait-for-kubernetes-api=200s \ Feb 02 00:11:21 crc kubenswrapper[5108]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.374249 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-r6t6x" podUID="ddd95e62-4b23-4887-b6e7-364a01924524" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.376398 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:21 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --disable-webhook \ Feb 02 00:11:21 crc kubenswrapper[5108]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.377569 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.383470 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd
602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"si
zeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.388947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389020 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389075 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389105 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.394023 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"alloca
tedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.403867 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1e
cf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.404054 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.405528 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408100 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408152 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408193 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408210 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.423718 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.434028 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.449682 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.458764 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.471429 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name
\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.485799 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.502668 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511335 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511414 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511435 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511465 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511485 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.530101 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.543184 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.557944 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.567578 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.569379 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.569644 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 02 00:11:21 
crc kubenswrapper[5108]: I0202 00:11:21.583813 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.593543 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"me
mory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.595799 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.600906 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.606918 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.609575 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614147 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614248 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614302 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614551 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.619890 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 02 
00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.622757 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.633036 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.645675 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.647362 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.653851 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.657215 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.668358 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.669004 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.669211 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.674981 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.675919 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" 
path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.678770 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.680177 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.683930 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.686074 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.686288 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.688893 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.692788 5108 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.697855 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.699483 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.703796 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.709696 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\
\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.710302 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.715949 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716815 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716858 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716867 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716897 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.719166 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.722561 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.724064 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.731763 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.732689 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.736333 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.737937 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.738021 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.748601 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.750771 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.753604 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.758935 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.763688 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.764511 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.771387 5108 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.771534 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.784801 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.784939 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"
},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"
cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.797106 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.797720 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.801403 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.805850 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.806765 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.807972 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.813551 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.814573 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.815859 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819439 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819459 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819507 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819815 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.821801 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.834168 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.835557 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.836311 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.838482 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.839994 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.841795 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.843623 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.845995 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.847557 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.849159 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.864763 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.866000 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.865996 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.866016 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866119 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.866170 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866486 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866524 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866385 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.875217 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.913600 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 
00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923305 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923499 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.954154 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.002971 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027183 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027202 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027263 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027284 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.040130 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.095106 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.115415 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.130334 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.130776 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.131491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.132078 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.132431 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.156780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158593 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158669 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158687 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158800 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.158771359 +0000 UTC m=+83.434268469 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.163170 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.199536 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.236423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.236895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.239149 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.238964 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.239422 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.239905 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258361 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258588 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258676 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.258946 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.258985 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259007 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259074 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.259040215 +0000 UTC m=+83.534537175 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259117 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.259103726 +0000 UTC m=+83.534600686 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259157 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259349 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.259301162 +0000 UTC m=+83.534798212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259822 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.260046 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.26001701 +0000 UTC m=+83.535513980 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.280026 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.308274 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.309475 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:22 crc 
kubenswrapper[5108]: E0202 00:11:22.309802 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.314983 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342906 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342922 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342958 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.357095 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.359566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.359756 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.359831 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.359809343 +0000 UTC m=+83.635306273 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.393302 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.435209 5108 status_manager.go:919] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445261 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445350 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445400 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445449 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445499 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.472506 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.512284 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548061 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548131 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548187 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.554547 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.594706 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.638817 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651098 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651117 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651159 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.688549 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.714762 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754867 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754881 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754972 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.755961 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.793774 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.834541 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857543 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857664 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.874252 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.914962 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961099 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961118 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961144 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961166 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063831 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063898 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063942 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063960 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166502 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166570 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166623 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269877 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269943 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269962 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269976 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373402 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373730 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.374036 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477409 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477484 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477537 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477558 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557639 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557703 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557673 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.557901 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557934 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.558042 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.558107 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.558134 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.582682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.583092 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.583342 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.583739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.584273 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688355 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688933 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.689057 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791666 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791683 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791707 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791722 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.894589 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.894876 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.895045 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.895331 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.895476 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.997811 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.998732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.998818 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.998905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.999027 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101089 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101144 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101154 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101178 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.183325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183516 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183534 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183546 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183602 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.183584128 +0000 UTC m=+87.459081058 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203862 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203873 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203893 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203905 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.284694 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.284821 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.284898 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.284912 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.28486822 +0000 UTC m=+87.560365180 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.284974 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.284952952 +0000 UTC m=+87.560449882 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.285013 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.285082 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285214 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285287 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.28527537 +0000 UTC m=+87.560772300 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
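The nestedpendingoperations.go entries above are kubelet's per-volume retry gate: after each failed MountVolume/UnmountVolume the operation is locked out for an exponentially growing durationBeforeRetry (4s here, doubling to 8s when the same volumes fail again at m=+95 further down). A minimal sketch of that progression, assuming the 500ms initial delay, 2x factor, and ~2m2s cap conventionally used by kubelet's exponential-backoff helper (the constants are an assumption, not values read from this log):

# Sketch only: reproduces the durationBeforeRetry progression seen in this log.
# INITIAL/FACTOR/CAP are assumed defaults, not values taken from the log itself.
from datetime import timedelta

INITIAL = timedelta(milliseconds=500)  # assumed first lockout
FACTOR = 2                             # lockout doubles on each repeat failure
CAP = timedelta(minutes=2, seconds=2)  # assumed upper bound

def next_delay(previous):
    """Return the lockout to apply after one more failure of the same operation."""
    if previous is None:
        return INITIAL
    return min(previous * FACTOR, CAP)

delay = None
for failure in range(1, 7):
    delay = next_delay(delay)
    print(f"failure {failure}: no retries permitted for {delay.total_seconds()}s")
# failures 4 and 5 yield 4.0s and 8.0s, matching the lockouts recorded for
# these volumes at 00:11:24 (m=+87, retry in 4s) and 00:11:28 (m=+95, retry in 8s).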
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285310 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285745 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285762 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285924 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.285912497 +0000 UTC m=+87.561409427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306657 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306759 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306806 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.385772 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.385981 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.386096 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.386069179 +0000 UTC m=+87.661566109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409751 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409841 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
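Every mount failure in this stretch bottoms out in the same handful of API objects the kubelet has not yet registered in its object cache: the kube-root-ca.crt and openshift-service-ca.crt configmaps in openshift-network-diagnostics, the networking-console-plugin configmap and networking-console-plugin-cert secret in openshift-network-console, and the metrics-daemon-secret in openshift-multus. A short tally over a saved copy of this log (the file name is an illustrative placeholder) makes the repetition easy to see:

# Counts occurrences of 'object "<ns>"/"<name>" not registered' in a saved log.
# "kubelet.log" stands in for a capture like the one above.
import re
from collections import Counter

NOT_REGISTERED = re.compile(r'object "([^"]+)"/"([^"]+)" not registered')

def tally(path):
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            for ns, name in NOT_REGISTERED.findall(line):
                counts[f"{ns}/{name}"] += 1
    return counts

for obj, n in tally("kubelet.log").most_common():
    print(f"{n:4d}  {obj}")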
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513155 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513174 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513184 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614843 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614860 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614873 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718111 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718189 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718207 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718247 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718264 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821874 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821950 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.822022 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925368 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925444 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925469 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925503 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925530 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028185 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028318 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028381 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.130918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131151 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131160 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131185 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233666 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233776 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233787 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233832 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336447 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336499 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336509 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336526 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336538 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.438960 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439032 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439077 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439095 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542090 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542159 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542189 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542199 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557348 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557459 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
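The setters.go:618 heartbeat that dominates this window embeds the node's Ready condition as literal JSON, so the reason and transition time can be pulled out mechanically rather than read by eye. A small parser, assuming only the journald-prefixed line shape shown above (the sample below is a trimmed version of one of these entries):

# Extracts the embedded Ready-condition JSON from a 'Node became not ready' line.
import json
import re

NOT_READY = re.compile(r'"Node became not ready" node="([^"]+)" condition=(\{.*\})')

def parse_not_ready(line):
    m = NOT_READY.search(line)
    if m is None:
        return None
    node, cond = m.group(1), json.loads(m.group(2))
    return node, cond["reason"], cond["lastTransitionTime"], cond["message"]

# Trimmed sample in the same shape as the entries above:
sample = ('Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542199 5108 '
          'setters.go:618] "Node became not ready" node="crc" condition='
          '{"type":"Ready","status":"False",'
          '"lastHeartbeatTime":"2026-02-02T00:11:25Z",'
          '"lastTransitionTime":"2026-02-02T00:11:25Z",'
          '"reason":"KubeletNotReady","message":"container runtime network not ready"}')
print(parse_not_ready(sample))
# -> ('crc', 'KubeletNotReady', '2026-02-02T00:11:25Z', 'container runtime network not ready')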
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557522 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557683 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557782 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557877 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645329 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645401 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645415 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645454 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747875 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747928 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747942 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747961 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747974 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850471 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850638 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952654 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952749 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054720 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054734 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054765 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054778 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157192 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157302 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157377 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157395 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260301 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260383 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260401 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260427 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260445 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363258 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363277 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363323 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466130 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466186 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569163 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569280 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569310 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569324 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671551 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671604 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671629 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773834 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773883 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773960 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876752 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876818 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876846 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980758 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980848 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980868 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980917 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084386 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084482 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084599 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084624 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.187891 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.187983 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.188007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.188043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.188070 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290601 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290627 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290664 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290688 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392541 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392626 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392670 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495547 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495635 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495660 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495680 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563162 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563215 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563167 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563328 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563382 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563521 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563554 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563628 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
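The sandbox-creation failures above all trace back to the one fact the runtime keeps reporting: there is no CNI configuration file under /etc/kubernetes/cni/net.d/ yet, so every pod that needs a network stays in this sync-and-skip loop until the network provider writes one. A quick node-side check along those lines (the directory is taken from the log message; the extension filter assumes the .conf/.conflist/.json names CNI config loaders conventionally accept):

# Lists candidate CNI network configs in the directory named by the error above.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path taken from the log message

def cni_configs(conf_dir=CNI_CONF_DIR):
    try:
        names = os.listdir(conf_dir)
    except FileNotFoundError:
        return []
    # Assumed filter: extensions CNI config loaders conventionally accept.
    return sorted(n for n in names if n.endswith((".conf", ".conflist", ".json")))

found = cni_configs()
print(found if found else "no CNI configuration file found - matches the kubelet error")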
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598733 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598754 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598783 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598803 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701352 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701446 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701491 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804607 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804668 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804731 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.906986 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907082 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907135 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907153 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.009951 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010212 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010303 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010334 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114822 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114848 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114913 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114942 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218455 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218526 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218549 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218575 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218594 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.235798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236110 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236196 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236224 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236433 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.236385658 +0000 UTC m=+95.511882638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322579 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322603 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.337863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338005 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.337978299 +0000 UTC m=+95.613475229 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.338174 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.338222 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.338280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338425 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338498 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338548 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.338515053 +0000 UTC m=+95.614012023 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338659 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.338622555 +0000 UTC m=+95.614119685 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338823 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338865 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338887 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338943 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.338929333 +0000 UTC m=+95.614426463 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425700 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425816 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425834 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.439264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.439438 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.439516 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.439496507 +0000 UTC m=+95.714993437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529460 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529562 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529583 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529659 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632935 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632980 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632999 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736221 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736393 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736412 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736426 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839307 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839366 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839402 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839414 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942397 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942470 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942512 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942531 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046131 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046399 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.149932 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.149999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.150016 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.150046 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.150064 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253125 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253154 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253173 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356326 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356349 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356399 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459450 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459476 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459506 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459526 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557144 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557386 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557445 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.557600 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557630 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.557397 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.557830 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.558005 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562609 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562709 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562735 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666067 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666159 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666185 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666204 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768731 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768830 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768856 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768893 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768914 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871291 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871380 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871400 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871448 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974218 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974279 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974290 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076678 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076752 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076772 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076800 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076820 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179743 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179761 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179787 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179804 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282124 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282142 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282165 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282184 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384429 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384502 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384510 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487430 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487479 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589931 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589949 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693177 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693308 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693331 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693363 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693389 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796660 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796701 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796714 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899777 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899830 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.987341 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003347 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003444 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003464 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003516 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106275 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106315 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106340 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208717 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208805 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208821 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311769 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311828 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311842 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311860 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311874 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414653 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414789 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414820 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414840 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517393 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517451 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517469 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517500 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517519 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.556572 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.556624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.556797 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.556986 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.557143 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.557912 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.557995 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.558355 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.566960 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:31 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 02 00:11:31 crc kubenswrapper[5108]: apiVersion: v1 Feb 02 00:11:31 crc kubenswrapper[5108]: clusters: Feb 02 00:11:31 crc kubenswrapper[5108]: - cluster: Feb 02 00:11:31 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 02 00:11:31 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443 Feb 02 00:11:31 crc kubenswrapper[5108]: name: default-cluster Feb 02 00:11:31 crc kubenswrapper[5108]: contexts: Feb 02 00:11:31 crc kubenswrapper[5108]: - context: Feb 02 00:11:31 crc kubenswrapper[5108]: cluster: default-cluster Feb 02 00:11:31 crc kubenswrapper[5108]: namespace: default Feb 02 00:11:31 crc kubenswrapper[5108]: user: default-auth Feb 02 00:11:31 crc kubenswrapper[5108]: name: default-context Feb 02 00:11:31 crc kubenswrapper[5108]: current-context: default-context Feb 02 00:11:31 crc kubenswrapper[5108]: kind: Config Feb 02 00:11:31 crc kubenswrapper[5108]: preferences: {} Feb 02 00:11:31 crc kubenswrapper[5108]: users: Feb 02 00:11:31 crc kubenswrapper[5108]: - name: default-auth Feb 02 00:11:31 crc kubenswrapper[5108]: user: Feb 02 00:11:31 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:31 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:31 crc kubenswrapper[5108]: EOF Feb 02 00:11:31 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-66k84_openshift-ovn-kubernetes(d0c5973e-49ea-41a0-87d5-c8e867ee8a66): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:31 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.569718 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.581604 5108 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2
df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.598454 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.615751 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620101 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620184 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620262 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.628766 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.642002 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.672534 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0
768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.684139 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.691814 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.691930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.691953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.692006 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.692026 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.696198 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.707684 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBy
tes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"
e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711789 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711865 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711887 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711906 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.712139 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplemental
Groups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.728122 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.728419 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.740065 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.750987 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.767297 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780699 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780712 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.784323 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.795955 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800948 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800966 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800979 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.804159 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.815339 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.816629 5108 
kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/red
hat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b6
0418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.824996 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825079 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825097 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825110 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.830610 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.836270 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90
bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.836448 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838581 5108 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838734 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838784 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.840541 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.851963 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.940950 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941016 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941047 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941080 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044257 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044272 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044304 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146818 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146927 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146959 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250298 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250345 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250362 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353129 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353201 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353357 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353378 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456531 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456601 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456666 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.481223 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558884 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558951 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558980 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: E0202 00:11:32.559330 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:32 crc kubenswrapper[5108]: E0202 00:11:32.561599 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:32 crc kubenswrapper[5108]: E0202 00:11:32.562764 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660913 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660970 5108 
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660913 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660992 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.661003 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763127 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763151 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763398 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.785405 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865746 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865780 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865801 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865811 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967887 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070926 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070977 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173579 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173639 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173681 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275878 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275903 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275932 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275954 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378801 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378873 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378884 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378900 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378910 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481597 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481663 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481680 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481692 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.556795 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.556842 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.556968 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.556984 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.557320 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.557502 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.557902 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163"
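Every one of the sync failures above is gated on the same condition the node-status loop keeps reporting: the CRI runtime answers NetworkReady=false because nothing has yet written a CNI network configuration into /etc/kubernetes/cni/net.d/ (the multus pod that eventually does so is only just starting in this log). Roughly, the readiness complaint amounts to a directory scan like the Go sketch below; the .conf/.conflist/.json extension filter follows libcni's usual convention and is an assumption here, not code lifted from the runtime:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. libcni conventionally accepts .conf, .conflist
// and .json; an empty directory is what produces the
// "no CNI configuration file in ..." message seen above.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err)
}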
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583755 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583819 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583861 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.685931 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.685975 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.685989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.686009 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.686020 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788656 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788708 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788724 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788741 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.789032 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891214 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891278 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891287 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891313 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994197 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994256 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994294 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994307 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097448 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097468 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097515 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199914 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199979 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199997 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.200009 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302481 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302558 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302594 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
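The condition object that setters.go:618 prints on every iteration above is the node's Ready condition verbatim; across iterations only the two timestamps advance while the reason stays KubeletNotReady. A minimal Go sketch that decodes exactly the JSON shape from these entries and tests it (field subset only, for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields of the Ready condition as logged
// by setters.go:618 above (a subset of v1.NodeCondition).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied verbatim from the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	notReady := c.Type == "Ready" && c.Status == "False"
	fmt.Printf("notReady=%v reason=%s\n", notReady, c.Reason)
}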
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302594 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.405936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406044 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406095 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.413790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerStarted","Data":"9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.432351 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.446560 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.462419 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.477497 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.488391 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.499587 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 
crc kubenswrapper[5108]: I0202 00:11:34.509351 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509360 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509385 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.521665 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.538283 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.566837 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.608819 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622537 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622653 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622685 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.629187 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.642351 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 
00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.650087 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.660832 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.669461 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.682355 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc 
kubenswrapper[5108]: I0202 00:11:34.692989 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.708599 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.718644 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725312 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725344 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725376 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827683 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827733 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827743 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827762 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827773 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931699 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931709 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034744 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034817 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136565 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136607 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136631 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136640 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.239939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240003 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240020 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240040 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240052 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354828 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354878 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354888 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354916 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.421373 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.436722 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef
82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.456328 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457871 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457961 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457977 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.475415 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.503134 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.518751 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.537616 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.550262 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.556665 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.556669 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.556670 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.556900 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.556993 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.557464 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.557676 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.557826 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.558731 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.558891 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561300 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561348 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561368 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561394 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561414 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.578737 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.598277 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.614340 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.627155 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\
"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.639139 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.651649 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681817 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681926 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681956 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681993 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.682020 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.695056 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.711869 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.730085 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.752508 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.773114 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784663 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784737 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784763 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784795 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784965 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.789791 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888898 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888937 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991830 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991948 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991987 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.992015 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094737 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094833 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197089 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197168 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197206 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.243414 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243725 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243774 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243788 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243880 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.243856779 +0000 UTC m=+111.519353709 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299856 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299934 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299958 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299976 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344406 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344551 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344606 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344633 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344749 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.344670829 +0000 UTC m=+111.620167769 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344811 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344831 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344847 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344854 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344919 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.345004 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.344973267 +0000 UTC m=+111.620470427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.345033 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.345021628 +0000 UTC m=+111.620518568 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.345085 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.345046829 +0000 UTC m=+111.620543799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403081 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403147 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403195 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403208 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.426657 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3" exitCode=0
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.426760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.429543 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdw92" event={"ID":"f5434f05-9acb-4d0c-a175-d5efc97194da","Type":"ContainerStarted","Data":"22e2a143e93948ce93981443bd6a4c85d0496e1b5144a763c304fc600225a6d1"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.431157 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6t6x" event={"ID":"ddd95e62-4b23-4887-b6e7-364a01924524","Type":"ContainerStarted","Data":"591f87cda3af3c29bd84b8ad7eb421f7243aa4ec7525512c379d920df7069119"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.434827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ab0f2b650398839efb319e4d55c18cc6d56404982fbd82913f7515041dfbbba9"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.434969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"879ce06a2cae6424fd3915643915f9404b42efdff9a788044d1d7b368c644cc4"}
Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.445807 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.446040 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.446127 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.446106815 +0000 UTC m=+111.721603755 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.449095 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.466357 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.477674 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.491111 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508160 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508190 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508213 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508571 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.528255 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.551486 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45a
ced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.565025 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.585071 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin
\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.601820 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610888 5108 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610995 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.624158 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.646137 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.664388 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.678577 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.691986 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.705135 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713432 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 
crc kubenswrapper[5108]: I0202 00:11:36.713485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713495 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713518 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713530 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.749624 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.762259 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.774065 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.788980 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.805890 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815422 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815498 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815516 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815552 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.821107 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.835561 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\
"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.847943 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://22e2a143e93948ce93981443bd6a4c85d0496e1b5144a763c304fc600225a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mou
ntPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.860188 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.880314 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
dctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.891601 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.904254 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.915648 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919178 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919191 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919325 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.928320 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.937993 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://591f87cda3af3c29bd84b8ad7eb421f7243aa4ec7525512c379d920df7069119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.948160 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\
\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.959022 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.971119 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.991785 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc 
kubenswrapper[5108]: I0202 00:11:37.002462 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.014638 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bina
ry-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022040 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022091 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022119 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022130 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.024951 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124662 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.228156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.228784 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.228989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.229260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.229503 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332620 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332634 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332654 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332668 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436100 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436150 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.440823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"cb11ed559484d3cfe33ff0dee1351623d3707756e0b564e080a789719b6b19bd"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.443986 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"976b7c960dc45b34c63bbb69faf38320c43249f1704bfb4265d24cffa187c7ef"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.446658 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerStarted","Data":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.446731 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerStarted","Data":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.538994 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539074 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539095 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539154 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.559096 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.559096 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.559812 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.559849 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.560129 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.560542 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.560610 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.560733 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.637590 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.637560925 podStartE2EDuration="17.637560925s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.635556502 +0000 UTC m=+96.911053452" watchObservedRunningTime="2026-02-02 00:11:37.637560925 +0000 UTC m=+96.913057895" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641281 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641394 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641416 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641445 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641469 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.691032 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q22wv" podStartSLOduration=73.69100334 podStartE2EDuration="1m13.69100334s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.673911137 +0000 UTC m=+96.949408087" watchObservedRunningTime="2026-02-02 00:11:37.69100334 +0000 UTC m=+96.966500270" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.691529 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xdw92" podStartSLOduration=73.691524734 podStartE2EDuration="1m13.691524734s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.690581999 +0000 UTC m=+96.966078959" watchObservedRunningTime="2026-02-02 00:11:37.691524734 +0000 UTC m=+96.967021664" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.742179 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.742156775 podStartE2EDuration="18.742156775s" podCreationTimestamp="2026-02-02 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.740660205 +0000 UTC m=+97.016157145" watchObservedRunningTime="2026-02-02 00:11:37.742156775 +0000 UTC m=+97.017653715" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.747825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748180 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748466 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.813272 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.813219656 podStartE2EDuration="17.813219656s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.797749896 +0000 UTC m=+97.073246856" watchObservedRunningTime="2026-02-02 00:11:37.813219656 +0000 UTC m=+97.088716596" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.829323 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-r6t6x" podStartSLOduration=73.829297352 podStartE2EDuration="1m13.829297352s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.829090677 +0000 UTC m=+97.104587657" watchObservedRunningTime="2026-02-02 00:11:37.829297352 +0000 UTC m=+97.104794292" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.842428 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.842398189 podStartE2EDuration="17.842398189s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.842018899 +0000 UTC m=+97.117515839" watchObservedRunningTime="2026-02-02 00:11:37.842398189 +0000 UTC m=+97.117895139" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851529 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851545 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.904756 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podStartSLOduration=73.90471697 podStartE2EDuration="1m13.90471697s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.902829029 +0000 UTC m=+97.178325969" watchObservedRunningTime="2026-02-02 00:11:37.90471697 +0000 UTC m=+97.180213920" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954308 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954342 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954352 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954381 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.056937 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.056988 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.057000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.057019 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.057037 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.159925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.160013 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.160033 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.160064 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.160084 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.265957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.267600 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.267755 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.267923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.268031 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372980 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.373065 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.453346 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="976b7c960dc45b34c63bbb69faf38320c43249f1704bfb4265d24cffa187c7ef" exitCode=0 Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.453446 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"976b7c960dc45b34c63bbb69faf38320c43249f1704bfb4265d24cffa187c7ef"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.455904 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"f33b7fd2bdc58b68b66921615ba814d34a08b3b014ce87d7568901c5e8827ab6"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475390 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475553 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475627 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578161 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578217 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578268 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.680933 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.680991 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.681007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.681024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.681036 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.787791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.788333 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.788348 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.788369 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.788382 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.890755 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.890805 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.890814 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.890832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.890842 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.993754 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.993812 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.993833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.993859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.993880 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.096010 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.096104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.096127 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.096155 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.096174 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.198980 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.199066 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.199095 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.199130 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.199157 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.302103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.302161 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.302182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.302210 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.302267 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404877 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404920 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404935 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.462188 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="0f008faf256631411f3e436dcbb8c373c8041ea92bcc52571fdec0ad03f45ff6" exitCode=0 Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.462256 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"0f008faf256631411f3e436dcbb8c373c8041ea92bcc52571fdec0ad03f45ff6"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506568 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506650 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506660 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.566597 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.566805 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.567570 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.567753 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.567914 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.568046 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.568153 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.568334 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609436 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609451 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609473 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609488 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.713250 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.713293 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.713305 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.713326 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.713337 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.816105 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.816181 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.816203 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.816312 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.816336 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.921571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.921637 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.921654 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.921672 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.921685 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.024869 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.025278 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.025367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.025463 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.025532 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.127609 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.127901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.128056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.128175 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.128327 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.233565 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.233640 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.233659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.233688 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.233709 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.336035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.336096 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.336112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.336134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.336150 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439937 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439988 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.468883 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="c8a240fc2274e69a855a1db85ba3f09c991ead80a19c23dff1b81ff2455db9ea" exitCode=0 Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.468965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"c8a240fc2274e69a855a1db85ba3f09c991ead80a19c23dff1b81ff2455db9ea"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542874 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542897 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542922 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644462 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644510 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644524 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644534 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.747187 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.747237 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.747268 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.747283 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.747292 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.849880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.849927 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.849941 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.849960 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.849974 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.955158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.955284 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.955305 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.955335 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.955355 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.061952 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.062028 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.062046 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.062073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.062091 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.165406 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.165478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.165494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.165521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.165537 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276425 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276500 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276519 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.293714 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396769 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396794 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396856 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.478435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"85fe5dfe261ea98fd7dad0878bb19fe9ffd26af63b2d211af07186d1d412a23a"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500283 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500307 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.558801 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559034 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.559040 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559191 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.559223 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.559284 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559381 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559470 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606341 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606405 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606436 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709819 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709883 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709937 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813506 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813586 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813635 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813653 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916146 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916206 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916219 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916264 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916279 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019381 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019391 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019408 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019421 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:42Z","lastTransitionTime":"2026-02-02T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122550 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122564 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122588 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122606 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:42Z","lastTransitionTime":"2026-02-02T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150085 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150192 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150203 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:42Z","lastTransitionTime":"2026-02-02T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.214023 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"] Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.361188 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.365682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.365752 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.366055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.366365 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423397 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423466 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423780 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423979 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.487126 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="85fe5dfe261ea98fd7dad0878bb19fe9ffd26af63b2d211af07186d1d412a23a" exitCode=0
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.487209 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"85fe5dfe261ea98fd7dad0878bb19fe9ffd26af63b2d211af07186d1d412a23a"}
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526645 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526728 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526799 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526952 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526973 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.527969 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.528836 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.535145 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.544391 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.545256 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.565769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.685779 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: W0202 00:11:42.715044 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd16b6d_22dc_4e5a_a206_6b8eab5a0533.slice/crio-5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f WatchSource:0}: Error finding container 5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f: Status 404 returned error can't find the container with id 5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.492702 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" event={"ID":"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533","Type":"ContainerStarted","Data":"fdb42cb6daa4e93dd1ebd4524856070c6775adea89e74ffcbaf6faa2ea1f682d"}
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.492784 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" event={"ID":"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533","Type":"ContainerStarted","Data":"5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f"}
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.498914 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="b92f2e96de651da46e45924d3aa1ff4c8a9c2f7877090b4baa708056e8b41f50" exitCode=0
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.498994 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"b92f2e96de651da46e45924d3aa1ff4c8a9c2f7877090b4baa708056e8b41f50"}
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.540740 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" podStartSLOduration=79.540706712 podStartE2EDuration="1m19.540706712s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:43.512483194 +0000 UTC m=+102.787980164" watchObservedRunningTime="2026-02-02 00:11:43.540706712 +0000 UTC m=+102.816203662"
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557514 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557564 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557748 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.557739 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.557971 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.558130 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.558284 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163"
Feb 02 00:11:44 crc kubenswrapper[5108]: I0202 00:11:44.522791 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"469e6bc3fd7bc3862cd77ae516c5cd503e5c6cf68a260b443b2b257ab6fcd60f"}
Feb 02 00:11:44 crc kubenswrapper[5108]: I0202 00:11:44.554466 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gbldp" podStartSLOduration=80.554442486 podStartE2EDuration="1m20.554442486s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:44.552768782 +0000 UTC m=+103.828265752" watchObservedRunningTime="2026-02-02 00:11:44.554442486 +0000 UTC m=+103.829939496"
Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.556966 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.557907 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.557039 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.558185 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.557036 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.558381 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.558752 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.534404 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f" exitCode=0 Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.534634 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.562347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.562541 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.562546 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.562875 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.563211 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.563324 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.563401 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.563486 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.542062 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"af976d2979a45794a11c98dae39890ecd1007c20716cbc8d4471c47d5d6c31ee"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.542739 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.548886 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549084 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549207 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549498 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.557900 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:48 crc kubenswrapper[5108]: E0202 00:11:48.558419 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.563927 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podStartSLOduration=84.563898509 podStartE2EDuration="1m24.563898509s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:48.56359118 +0000 UTC m=+107.839088120" watchObservedRunningTime="2026-02-02 00:11:48.563898509 +0000 UTC m=+107.839395449" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556621 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556700 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.557803 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556881 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556728 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.558000 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.558239 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.558382 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.563963 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.564002 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.564069 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.564767 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.565035 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.565497 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.566473 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.566703 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.566765 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.254932 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255159 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255187 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255247 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255325 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.255304978 +0000 UTC m=+143.530801908 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.356357 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.356470 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356586 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.356545729 +0000 UTC m=+143.632042679 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356720 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356849 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.356753 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356888 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.356855038 +0000 UTC m=+143.632351978 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357015 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.356982691 +0000 UTC m=+143.632479771 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.357084 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357308 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357338 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357353 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357420 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.357410702 +0000 UTC m=+143.632907632 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.458654 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.458861 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.458986 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.458956011 +0000 UTC m=+143.734452971 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564387 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564387 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.564978 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564550 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.565267 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
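Note: the "No retries permitted until ..." lines above encode the volume manager's per-operation exponential back-off, and the arithmetic is visible in the log itself: each failure at 00:11:52 schedules its retry for 00:12:24 at the matching sub-second offset, i.e. exactly the stated durationBeforeRetry of 32s. If the back-off starts at 500ms and doubles on each failure (assumed constants; only the 32s figure comes from this log), 32s = 0.5 x 2^6 would correspond to the seventh consecutive failure of each mount.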
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.565406 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.565508 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.580974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.581804 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.581848 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.581867 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.620615 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.630155 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podStartSLOduration=89.630126294 podStartE2EDuration="1m29.630126294s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:53.629839127 +0000 UTC m=+112.905336127" watchObservedRunningTime="2026-02-02 00:11:53.630126294 +0000 UTC m=+112.905623254" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.633530 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.321470 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26ppl"] Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.323122 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.323330 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.561504 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.561529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.561598 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.562059 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.562100 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.561863 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.557556 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.557751 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.557804 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.557977 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.558193 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.558259 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.558521 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.558668 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557346 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557461 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557355 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557346 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.557626 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.557782 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.558021 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.558097 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.656719 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.657062 5108 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.731600 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.786979 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.787620 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.790427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.790803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.790982 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.792501 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.792693 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.793155 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.794137 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29499840-njc6g"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.807428 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.808284 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.812616 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.813082 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.813301 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.813464 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.814050 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.814322 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.819168 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wbv6f"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.819622 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.834106 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.834752 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.835219 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.843794 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q88tw"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.854703 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.866343 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.866732 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.866975 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.867459 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.867593 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871644 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871738 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871761 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ndv\" (UniqueName: \"kubernetes.io/projected/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-kube-api-access-x7ndv\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871781 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871809 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-machine-approver-tls\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871856 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872061 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872130 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872152 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872176 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-auth-proxy-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872891 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.873199 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.874542 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.879331 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fn572"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.880047 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.884187 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.884959 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.885108 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.885335 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.885562 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.886758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.887106 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.887280 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.887427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.888051 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.895184 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.947970 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.948179 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.948418 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953114 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953291 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953653 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953946 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954040 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954180 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954220 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954351 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954370 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954197 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954712 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954745 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954888 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.955074 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973855 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973892 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973921 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-audit\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973942 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973983 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbf5j\" (UniqueName: \"kubernetes.io/projected/8490096f-f230-4160-bb09-338c9fa9f7ca-kube-api-access-gbf5j\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974002 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974130 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-auth-proxy-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974302 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974349 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974381 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974414 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-serving-cert\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974447 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974577 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-encryption-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974612 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-images\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974659 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974759 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974808 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sqd\" (UniqueName: \"kubernetes.io/projected/688cb527-1d6f-4e22-9b14-4718201c8343-kube-api-access-49sqd\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974865 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-audit-dir\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975060 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-config\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975105 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-auth-proxy-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975126 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975113 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975110 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975686 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975701 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975745 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975893 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975988 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ndv\" (UniqueName: \"kubernetes.io/projected/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-kube-api-access-x7ndv\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976021 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976052 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-machine-approver-tls\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976061 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976078 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/688cb527-1d6f-4e22-9b14-4718201c8343-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976138 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-image-import-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976420 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.977081 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.977746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.990067 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"]
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.994369 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.996912 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.999061 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-machine-approver-tls\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.002756 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.004296 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.008486 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.008988 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ndv\" (UniqueName: \"kubernetes.io/projected/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-kube-api-access-x7ndv\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.036342 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-9pw49"]
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.036514 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.038868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.038930 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.039080 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.038874 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.039655 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.042499 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"]
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.042566 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.042660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-9pw49"
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.051333 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.052047 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.052675 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053100 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053306 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053733 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053920 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053834 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.054105 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.054468 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.057055 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.057364 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.060724 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.061651 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.061798 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.072884 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.073182 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077155 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/688cb527-1d6f-4e22-9b14-4718201c8343-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077249 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-image-import-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077333 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-encryption-config\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077356 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jv7\" (UniqueName: \"kubernetes.io/projected/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-kube-api-access-d6jv7\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077375 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7088c96-1022-40ff-a06c-f6c299744e3a-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077396 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ffn\" (UniqueName: \"kubernetes.io/projected/d7088c96-1022-40ff-a06c-f6c299744e3a-kube-api-access-m7ffn\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077416 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-audit\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077439 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077458 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077477 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbf5j\" (UniqueName: \"kubernetes.io/projected/8490096f-f230-4160-bb09-338c9fa9f7ca-kube-api-access-gbf5j\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077496 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29780476-3e92-4559-af84-e97ab8689687-config\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077712 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-policies\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077916 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-serving-cert\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078836 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-dir\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-serving-cert\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078880 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-client\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078933 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-encryption-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-images\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078977 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-client\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-serving-ca\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079023 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-49sqd\" (UniqueName: \"kubernetes.io/projected/688cb527-1d6f-4e22-9b14-4718201c8343-kube-api-access-49sqd\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079043 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29780476-3e92-4559-af84-e97ab8689687-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-audit-dir\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079085 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnxh\" (UniqueName: \"kubernetes.io/projected/29780476-3e92-4559-af84-e97ab8689687-kube-api-access-8rnxh\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079107 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-config\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.080093 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-image-import-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: 
\"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078541 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-audit\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.080285 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.081100 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.081170 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-audit-dir\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.081363 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-config\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.083471 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.084924 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-serving-cert\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.087411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-encryption-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.087615 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-client\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc 
kubenswrapper[5108]: I0202 00:12:00.087900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/688cb527-1d6f-4e22-9b14-4718201c8343-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.090147 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-images\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.097610 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbf5j\" (UniqueName: \"kubernetes.io/projected/8490096f-f230-4160-bb09-338c9fa9f7ca-kube-api-access-gbf5j\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.097869 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.097961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sqd\" (UniqueName: \"kubernetes.io/projected/688cb527-1d6f-4e22-9b14-4718201c8343-kube-api-access-49sqd\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.098005 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.100564 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.100799 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.100905 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.101177 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.122883 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.123082 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.125730 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.127739 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.127786 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128052 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128087 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128653 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-znc99"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128799 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.129542 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.130002 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131031 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131048 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131613 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131640 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131918 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.132150 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.132260 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.132269 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.140591 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.142986 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cvtnf"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.143914 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.145795 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.150185 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.150806 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.162735 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.162889 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.171284 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.176066 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180267 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180540 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnxh\" (UniqueName: \"kubernetes.io/projected/29780476-3e92-4559-af84-e97ab8689687-kube-api-access-8rnxh\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180762 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d2203371-fbdd-4110-9b33-39f278fbaa0d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180827 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180861 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180892 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2203371-fbdd-4110-9b33-39f278fbaa0d-serving-cert\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180926 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d485c3-4de5-4d03-adf4-56f546c56674-serving-cert\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180957 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-trusted-ca\") pod 
\"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180987 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2203371-fbdd-4110-9b33-39f278fbaa0d-config\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181025 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181057 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dace4fd5-2d12-4c11-8252-9ac7426f870b-serving-cert\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181198 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-config\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-encryption-config\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181418 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jv7\" (UniqueName: \"kubernetes.io/projected/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-kube-api-access-d6jv7\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181446 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7088c96-1022-40ff-a06c-f6c299744e3a-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vbckt\" 
(UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7ffn\" (UniqueName: \"kubernetes.io/projected/d7088c96-1022-40ff-a06c-f6c299744e3a-kube-api-access-m7ffn\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-oauth-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182443 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-trusted-ca-bundle\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182498 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ql89\" (UniqueName: \"kubernetes.io/projected/79d485c3-4de5-4d03-adf4-56f546c56674-kube-api-access-7ql89\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182651 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29780476-3e92-4559-af84-e97ab8689687-config\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182722 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182776 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4p9\" (UniqueName: \"kubernetes.io/projected/dace4fd5-2d12-4c11-8252-9ac7426f870b-kube-api-access-4r4p9\") pod \"console-operator-67c89758df-znc99\" (UID: 
\"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182825 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-policies\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182864 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-serving-cert\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182905 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2203371-fbdd-4110-9b33-39f278fbaa0d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182929 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182953 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-trusted-ca\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182981 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-dir\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183069 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183101 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183148 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-dir\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183169 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5ss\" (UniqueName: \"kubernetes.io/projected/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-kube-api-access-fw5ss\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183330 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29780476-3e92-4559-af84-e97ab8689687-config\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-client\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183412 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183522 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-policies\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183576 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-service-ca\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-oauth-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.184999 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185041 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185067 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-config\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185108 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-serving-ca\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185192 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqbk\" (UniqueName: 
\"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-kube-api-access-scqbk\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185211 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185250 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185297 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29780476-3e92-4559-af84-e97ab8689687-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185362 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185385 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185407 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.186108 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-serving-ca\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.188604 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-encryption-config\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.189206 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7088c96-1022-40ff-a06c-f6c299744e3a-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.190441 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29780476-3e92-4559-af84-e97ab8689687-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.190474 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.192160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-serving-cert\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.192639 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-client\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.195892 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.210157 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.212813 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: W0202 00:12:00.213299 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2e75fc_5a21_4f73_8f4c_050eb27c0601.slice/crio-58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1 WatchSource:0}: Error finding container 58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1: Status 404 returned error can't find the container with id 58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1 Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.224023 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.224316 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.241681 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.250785 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.271630 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.276017 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.276161 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.276335 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286367 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-config\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286409 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfp9\" (UniqueName: \"kubernetes.io/projected/2b96d2a0-be27-428e-8bfd-f78a09feb756-kube-api-access-rdfp9\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286461 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-oauth-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-trusted-ca-bundle\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7ql89\" (UniqueName: \"kubernetes.io/projected/79d485c3-4de5-4d03-adf4-56f546c56674-kube-api-access-7ql89\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4r4p9\" (UniqueName: \"kubernetes.io/projected/dace4fd5-2d12-4c11-8252-9ac7426f870b-kube-api-access-4r4p9\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 
crc kubenswrapper[5108]: I0202 00:12:00.286782 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74feb297-18d1-4e3a-b077-779e202c89da-tmp-dir\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286811 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2203371-fbdd-4110-9b33-39f278fbaa0d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286830 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-trusted-ca\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286882 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286938 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5ss\" (UniqueName: \"kubernetes.io/projected/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-kube-api-access-fw5ss\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286964 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286991 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-service-ca\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287026 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-oauth-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287048 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7cr\" (UniqueName: \"kubernetes.io/projected/64332d15-ee3f-4864-9165-3217a06b24c2-kube-api-access-hr7cr\") pod \"migrator-866fcbc849-m7wqk\" (UID: \"64332d15-ee3f-4864-9165-3217a06b24c2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287114 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-config\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287820 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-service-ca\") pod \"console-64d44f6ddf-9pw49\" (UID: 
\"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288039 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-trusted-ca-bundle\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288748 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-config\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288817 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59650315-e011-493f-bbf9-c20555ea6025-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-scqbk\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-kube-api-access-scqbk\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288881 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288919 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74feb297-18d1-4e3a-b077-779e202c89da-metrics-tls\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288954 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/59650315-e011-493f-bbf9-c20555ea6025-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288977 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2b96d2a0-be27-428e-8bfd-f78a09feb756-available-featuregates\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d2203371-fbdd-4110-9b33-39f278fbaa0d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289134 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289157 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2203371-fbdd-4110-9b33-39f278fbaa0d-serving-cert\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289174 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b96d2a0-be27-428e-8bfd-f78a09feb756-serving-cert\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59650315-e011-493f-bbf9-c20555ea6025-config\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289222 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d485c3-4de5-4d03-adf4-56f546c56674-serving-cert\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289254 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59650315-e011-493f-bbf9-c20555ea6025-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289275 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289293 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2203371-fbdd-4110-9b33-39f278fbaa0d-config\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289315 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85k8\" (UniqueName: \"kubernetes.io/projected/74feb297-18d1-4e3a-b077-779e202c89da-kube-api-access-q85k8\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289373 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dace4fd5-2d12-4c11-8252-9ac7426f870b-serving-cert\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.290087 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.290499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 
00:12:00.290916 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d2203371-fbdd-4110-9b33-39f278fbaa0d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291408 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-oauth-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291635 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291806 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2203371-fbdd-4110-9b33-39f278fbaa0d-config\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291852 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.292073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.292506 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.292607 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.293494 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.294175 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.294202 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d485c3-4de5-4d03-adf4-56f546c56674-serving-cert\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.294329 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.295766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.297337 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.297842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.298128 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.299495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.300747 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.300942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.301244 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2203371-fbdd-4110-9b33-39f278fbaa0d-serving-cert\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.301539 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.301553 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.302867 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.303545 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-oauth-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.330955 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.343168 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-cp5z2"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.343610 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.352243 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.371099 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.384149 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.384359 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.384870 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dace4fd5-2d12-4c11-8252-9ac7426f870b-serving-cert\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390457 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74feb297-18d1-4e3a-b077-779e202c89da-metrics-tls\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390491 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/59650315-e011-493f-bbf9-c20555ea6025-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2b96d2a0-be27-428e-8bfd-f78a09feb756-available-featuregates\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390557 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nghp5\" (UniqueName: \"kubernetes.io/projected/e1b2e108-2c25-4942-b6bb-9bd186134bc9-kube-api-access-nghp5\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390584 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b96d2a0-be27-428e-8bfd-f78a09feb756-serving-cert\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390603 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59650315-e011-493f-bbf9-c20555ea6025-config\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59650315-e011-493f-bbf9-c20555ea6025-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390647 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q85k8\" (UniqueName: \"kubernetes.io/projected/74feb297-18d1-4e3a-b077-779e202c89da-kube-api-access-q85k8\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfp9\" (UniqueName: \"kubernetes.io/projected/2b96d2a0-be27-428e-8bfd-f78a09feb756-kube-api-access-rdfp9\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74feb297-18d1-4e3a-b077-779e202c89da-tmp-dir\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390776 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7cr\" (UniqueName: \"kubernetes.io/projected/64332d15-ee3f-4864-9165-3217a06b24c2-kube-api-access-hr7cr\") pod \"migrator-866fcbc849-m7wqk\" (UID: \"64332d15-ee3f-4864-9165-3217a06b24c2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390798 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1b2e108-2c25-4942-b6bb-9bd186134bc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390830 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59650315-e011-493f-bbf9-c20555ea6025-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390847 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b2e108-2c25-4942-b6bb-9bd186134bc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.391387 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/59650315-e011-493f-bbf9-c20555ea6025-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.391640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2b96d2a0-be27-428e-8bfd-f78a09feb756-available-featuregates\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.392115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74feb297-18d1-4e3a-b077-779e202c89da-tmp-dir\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.399291 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.404689 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.411976 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.413096 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-trusted-ca\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.431320 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.440782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-config\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.449935 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.470880 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: W0202 00:12:00.476939 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8490096f_f230_4160_bb09_338c9fa9f7ca.slice/crio-3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f WatchSource:0}: Error finding container 3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f: Status 404 returned error can't find the container with id 3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.490855 5108 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.492138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1b2e108-2c25-4942-b6bb-9bd186134bc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.492191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b2e108-2c25-4942-b6bb-9bd186134bc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.492264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nghp5\" (UniqueName: \"kubernetes.io/projected/e1b2e108-2c25-4942-b6bb-9bd186134bc9-kube-api-access-nghp5\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.495005 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b2e108-2c25-4942-b6bb-9bd186134bc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.501645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74feb297-18d1-4e3a-b077-779e202c89da-metrics-tls\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.514736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.530378 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.550678 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.570557 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.571661 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.571699 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.571855 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.576943 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b96d2a0-be27-428e-8bfd-f78a09feb756-serving-cert\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.582943 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.590589 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.610026 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.619378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59650315-e011-493f-bbf9-c20555ea6025-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.625738 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerStarted","Data":"3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.625783 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.626026 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.631049 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.666703 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnxh\" (UniqueName: \"kubernetes.io/projected/29780476-3e92-4559-af84-e97ab8689687-kube-api-access-8rnxh\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.669777 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.672931 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59650315-e011-493f-bbf9-c20555ea6025-config\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.691281 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.727411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jv7\" (UniqueName: \"kubernetes.io/projected/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-kube-api-access-d6jv7\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.744497 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7ffn\" (UniqueName: \"kubernetes.io/projected/d7088c96-1022-40ff-a06c-f6c299744e3a-kube-api-access-m7ffn\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.750180 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.771052 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.790477 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.811499 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.830513 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.850034 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.867695 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892354 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" event={"ID":"1f2e75fc-5a21-4f73-8f4c-050eb27c0601","Type":"ContainerStarted","Data":"58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerStarted","Data":"ab1dda4ca19e44a7d7547556112d79c7a9164fc1db4386291660d7d4020c24e9"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerStarted","Data":"3158eaa8cced5445a37b12560efe834d0b215f5c202cf0145f728d9c8aaa5068"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892443 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.893491 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.906857 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.939116 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ql89\" (UniqueName: \"kubernetes.io/projected/79d485c3-4de5-4d03-adf4-56f546c56674-kube-api-access-7ql89\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.953289 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5ss\" (UniqueName: \"kubernetes.io/projected/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-kube-api-access-fw5ss\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.957089 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2203371-fbdd-4110-9b33-39f278fbaa0d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.960889 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.976679 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.985179 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.993679 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r4p9\" (UniqueName: \"kubernetes.io/projected/dace4fd5-2d12-4c11-8252-9ac7426f870b-kube-api-access-4r4p9\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.002198 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scqbk\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-kube-api-access-scqbk\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.007985 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.009701 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.014936 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.032978 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.050294 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.054096 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1b2e108-2c25-4942-b6bb-9bd186134bc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.057392 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.065030 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.070986 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.077442 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.078192 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.090239 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.113747 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.152071 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.192182 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerStarted","Data":"683d5e48d4bbd76223bfa55ebb9faedf8bd6693391a55afaa0790e34cd786995"} Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.192516 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.192747 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" event={"ID":"688cb527-1d6f-4e22-9b14-4718201c8343","Type":"ContainerStarted","Data":"1e9e5b2cca3ab853d62ce694bb95e422521c70191082faebdc45c803fbfe5db5"} Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.195046 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-4zf25"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.200334 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85k8\" (UniqueName: \"kubernetes.io/projected/74feb297-18d1-4e3a-b077-779e202c89da-kube-api-access-q85k8\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.216260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfp9\" (UniqueName: \"kubernetes.io/projected/2b96d2a0-be27-428e-8bfd-f78a09feb756-kube-api-access-rdfp9\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.245569 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59650315-e011-493f-bbf9-c20555ea6025-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.256895 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7cr\" (UniqueName: \"kubernetes.io/projected/64332d15-ee3f-4864-9165-3217a06b24c2-kube-api-access-hr7cr\") pod \"migrator-866fcbc849-m7wqk\" (UID: \"64332d15-ee3f-4864-9165-3217a06b24c2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.264782 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.270340 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.287661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nghp5\" (UniqueName: \"kubernetes.io/projected/e1b2e108-2c25-4942-b6bb-9bd186134bc9-kube-api-access-nghp5\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.293682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.293698 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.314571 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.330812 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.334477 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.335593 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.335772 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.351153 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.369627 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.373421 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.379002 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.391259 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.397178 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.406168 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03927a55_b629_4f9c_be0f_3499aba5b90e.slice/crio-ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072 WatchSource:0}: Error finding container ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072: Status 404 returned error can't find the container with id ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072 Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.407140 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d485c3_4de5_4d03_adf4_56f546c56674.slice/crio-8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20 WatchSource:0}: Error finding container 8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20: Status 404 returned error can't find the container with id 8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20 Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.409776 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2203371_fbdd_4110_9b33_39f278fbaa0d.slice/crio-92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d WatchSource:0}: Error finding container 92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d: Status 404 returned error can't find the container with id 92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.410987 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.431482 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.450035 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.464153 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddace4fd5_2d12_4c11_8252_9ac7426f870b.slice/crio-1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522 WatchSource:0}: Error finding container 1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522: Status 404 returned error can't find the container with id 1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522 Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.471105 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.492866 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.524815 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 02 00:12:01 crc 
kubenswrapper[5108]: I0202 00:12:01.533012 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wb8mw"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.533614 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.544554 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.552331 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.576849 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.597691 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.611392 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.629111 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.629200 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.631875 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.636309 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.637332 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.637590 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.653673 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.692122 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.716663 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.730900 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.750923 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.758692 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b96d2a0_be27_428e_8bfd_f78a09feb756.slice/crio-c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9 WatchSource:0}: Error finding container c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9: Status 404 returned error can't find the container with id c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9 Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.783697 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74feb297_18d1_4e3a_b077_779e202c89da.slice/crio-e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245 WatchSource:0}: Error finding container e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245: Status 404 returned error can't find the container with id e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245 Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.794809 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.811430 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.831900 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swg6f\" (UniqueName: \"kubernetes.io/projected/07d89198-8b8e-4edc-96b8-05b6df5194f6-kube-api-access-swg6f\") pod \"downloads-747b44746d-cp5z2\" (UID: \"07d89198-8b8e-4edc-96b8-05b6df5194f6\") " pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832012 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-images\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832067 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832135 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832161 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832202 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832436 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832581 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832654 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832690 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832865 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz7x\" (UniqueName: \"kubernetes.io/projected/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-kube-api-access-jqz7x\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: E0202 00:12:01.834551 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.33452943 +0000 UTC m=+121.610026360 (durationBeforeRetry 500ms). 
Feb 02 00:12:01 crc kubenswrapper[5108]: E0202 00:12:01.834551 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.33452943 +0000 UTC m=+121.610026360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
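This mount fails only because the kubevirt.io.hostpath-provisioner CSI driver has not yet registered its node plugin with the kubelet; once registration completes, the parked operation goes through. Until then `nestedpendingoperations.go` refuses immediate retries and backs off exponentially, starting at the `durationBeforeRetry 500ms` visible above (the cap below is an assumption, not the kubelet's exact constant). A stand-alone sketch of that retry shape:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Mirrors the failure above: the volume plugin asks for a CSI driver
    // by name before the driver's node plugin has registered.
    var errNotRegistered = errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

    // mountDevice pretends the driver finishes registering on the 4th attempt.
    func mountDevice(attempt int) error {
        if attempt < 4 {
            return errNotRegistered
        }
        return nil
    }

    func main() {
        delay := 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
        maxDelay := 2 * time.Minute     // assumed cap, not the kubelet's exact constant
        for attempt := 1; ; attempt++ {
            err := mountDevice(attempt)
            if err == nil {
                fmt.Println("MountVolume.MountDevice succeeded")
                return
            }
            fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }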
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.916341 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.931623 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.933909 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934148 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934185 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934208 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde8d9df-2e55-498d-acbe-7b5396cac5a7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934245 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde8d9df-2e55-498d-acbe-7b5396cac5a7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934275 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/525b7b06-ae33-4a3b-bf12-139bff69a17c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934296 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99916b4a-423b-4db6-a912-cc2ef585eab3-webhook-certs\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: 
\"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:01 crc kubenswrapper[5108]: E0202 00:12:01.934329 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.434294931 +0000 UTC m=+121.709791881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934384 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nz59\" (UniqueName: \"kubernetes.io/projected/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-kube-api-access-4nz59\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934430 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz7x\" (UniqueName: \"kubernetes.io/projected/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-kube-api-access-jqz7x\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934486 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swg6f\" (UniqueName: \"kubernetes.io/projected/07d89198-8b8e-4edc-96b8-05b6df5194f6-kube-api-access-swg6f\") pod \"downloads-747b44746d-cp5z2\" (UID: \"07d89198-8b8e-4edc-96b8-05b6df5194f6\") " pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934526 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b79d203-f1c7-4523-9d97-51181cdb26d2-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934575 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c22e3c9-f940-436c-bcd4-0ae77d143061-config\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934614 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7nm\" (UniqueName: \"kubernetes.io/projected/4c22e3c9-f940-436c-bcd4-0ae77d143061-kube-api-access-5c7nm\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: 
\"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934730 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-metrics-certs\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934756 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-images\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934782 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934873 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"image-registry-66587d64c8-mjr86\" (UID: 
\"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934925 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934956 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934984 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935016 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c22e3c9-f940-436c-bcd4-0ae77d143061-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935092 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b79d203-f1c7-4523-9d97-51181cdb26d2-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935136 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/45594040-ee30-4578-aa8c-a9e8ef858c06-tmp-dir\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935166 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9tm\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-kube-api-access-hm9tm\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935195 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmvq\" (UniqueName: \"kubernetes.io/projected/fde8d9df-2e55-498d-acbe-7b5396cac5a7-kube-api-access-qjmvq\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935244 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935270 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-stats-auth\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935294 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c22e3c9-f940-436c-bcd4-0ae77d143061-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935322 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935350 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525b7b06-ae33-4a3b-bf12-139bff69a17c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935391 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-client\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935426 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zr6\" (UniqueName: \"kubernetes.io/projected/99916b4a-423b-4db6-a912-cc2ef585eab3-kube-api-access-z5zr6\") pod \"multus-admission-controller-69db94689b-wb8mw\" 
(UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935458 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnj69\" (UniqueName: \"kubernetes.io/projected/45594040-ee30-4578-aa8c-a9e8ef858c06-kube-api-access-lnj69\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935481 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-default-certificate\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-serving-cert\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935611 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/525b7b06-ae33-4a3b-bf12-139bff69a17c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935645 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdjm\" (UniqueName: \"kubernetes.io/projected/031f8213-ba02-4add-9d14-c3a995a10fa9-kube-api-access-bwdjm\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935685 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935710 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-config\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031f8213-ba02-4add-9d14-c3a995a10fa9-service-ca-bundle\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " 
pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935784 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-service-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935911 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935940 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935964 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/525b7b06-ae33-4a3b-bf12-139bff69a17c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.936015 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.936916 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:01 crc kubenswrapper[5108]: 
I0202 00:12:01.937918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.938220 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.950110 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.969676 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.990130 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.010723 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.030796 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
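Each `reflector.go:430] "Caches populated"` line marks the moment a watch-backed cache finishes its initial LIST for one object that pods on this node reference (per-namespace Secrets and ConfigMaps here). The kubelet uses dedicated single-object reflectors for this, but the same sync point is what client-go exposes as `WaitForCacheSync`; a minimal sketch with a shared informer, assuming a standard kubeconfig for client setup:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
        cmInformer := factory.Core().V1().ConfigMaps().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // The equivalent moment to the kubelet's "Caches populated" lines:
        // the reflector's initial LIST has landed in the local store.
        if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
            panic("cache never synced")
        }
        fmt.Println(`Caches populated: *v1.ConfigMap`)
    }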
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031f8213-ba02-4add-9d14-c3a995a10fa9-service-ca-bundle\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037739 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-service-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037781 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038045 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038103 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/525b7b06-ae33-4a3b-bf12-139bff69a17c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038289 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"collect-profiles-29499840-qxdlz\" 
(UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-srv-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde8d9df-2e55-498d-acbe-7b5396cac5a7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/525b7b06-ae33-4a3b-bf12-139bff69a17c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde8d9df-2e55-498d-acbe-7b5396cac5a7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038696 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.038923 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.538900782 +0000 UTC m=+121.814397942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039446 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/525b7b06-ae33-4a3b-bf12-139bff69a17c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039505 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99916b4a-423b-4db6-a912-cc2ef585eab3-webhook-certs\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4nz59\" (UniqueName: \"kubernetes.io/projected/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-kube-api-access-4nz59\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039621 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b79d203-f1c7-4523-9d97-51181cdb26d2-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c22e3c9-f940-436c-bcd4-0ae77d143061-config\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039722 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7nm\" (UniqueName: \"kubernetes.io/projected/4c22e3c9-f940-436c-bcd4-0ae77d143061-kube-api-access-5c7nm\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039824 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039893 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-metrics-certs\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039938 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040104 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040180 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040250 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c22e3c9-f940-436c-bcd4-0ae77d143061-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040294 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97af9c02-0ff8-4146-9313-f3ecc17e1faa-tmpfs\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/525b7b06-ae33-4a3b-bf12-139bff69a17c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc 
kubenswrapper[5108]: I0202 00:12:02.040470 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c22e3c9-f940-436c-bcd4-0ae77d143061-config\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde8d9df-2e55-498d-acbe-7b5396cac5a7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041206 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdgn\" (UniqueName: \"kubernetes.io/projected/97af9c02-0ff8-4146-9313-f3ecc17e1faa-kube-api-access-8hdgn\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041361 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b79d203-f1c7-4523-9d97-51181cdb26d2-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041430 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/45594040-ee30-4578-aa8c-a9e8ef858c06-tmp-dir\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041466 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9tm\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-kube-api-access-hm9tm\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041550 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c22e3c9-f940-436c-bcd4-0ae77d143061-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmvq\" (UniqueName: \"kubernetes.io/projected/fde8d9df-2e55-498d-acbe-7b5396cac5a7-kube-api-access-qjmvq\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 
00:12:02.041643 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-stats-auth\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041748 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c22e3c9-f940-436c-bcd4-0ae77d143061-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041802 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525b7b06-ae33-4a3b-bf12-139bff69a17c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-client\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zr6\" (UniqueName: \"kubernetes.io/projected/99916b4a-423b-4db6-a912-cc2ef585eab3-kube-api-access-z5zr6\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042088 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnj69\" (UniqueName: \"kubernetes.io/projected/45594040-ee30-4578-aa8c-a9e8ef858c06-kube-api-access-lnj69\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042131 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-default-certificate\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042257 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/45594040-ee30-4578-aa8c-a9e8ef858c06-tmp-dir\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042714 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b79d203-f1c7-4523-9d97-51181cdb26d2-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.045799 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b79d203-f1c7-4523-9d97-51181cdb26d2-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.046480 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde8d9df-2e55-498d-acbe-7b5396cac5a7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.046566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-default-certificate\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.047119 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-stats-auth\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.047915 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.050846 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.050985 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c22e3c9-f940-436c-bcd4-0ae77d143061-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.052379 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.054373 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/525b7b06-ae33-4a3b-bf12-139bff69a17c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.058815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031f8213-ba02-4add-9d14-c3a995a10fa9-service-ca-bundle\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.070977 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.090318 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.106664 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-metrics-certs\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.120729 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.131777 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.132552 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.134027 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.136938 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-xtqwv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.137092 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146301 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146581 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146652 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-srv-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.146730 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.646691756 +0000 UTC m=+121.922188686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146998 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97af9c02-0ff8-4146-9313-f3ecc17e1faa-tmpfs\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.147069 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdgn\" (UniqueName: \"kubernetes.io/projected/97af9c02-0ff8-4146-9313-f3ecc17e1faa-kube-api-access-8hdgn\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.148083 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97af9c02-0ff8-4146-9313-f3ecc17e1faa-tmpfs\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.150407 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.171185 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.184459 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.184884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.190749 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.211885 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.214157 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"] Feb 02 00:12:02 
crc kubenswrapper[5108]: I0202 00:12:02.214595 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.230634 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.233443 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.248928 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.250971 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.750952987 +0000 UTC m=+122.026449917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.254146 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.256187 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.271014 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.286993 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99916b4a-423b-4db6-a912-cc2ef585eab3-webhook-certs\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.290088 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.309410 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fc5pz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.309475 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.313146 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.320772 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.330200 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.332798 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.332844 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-4zcv5"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.350939 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.351077 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.851043977 +0000 UTC m=+122.126540907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.351668 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.351895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-tmpfs\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352053 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-webhook-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352126 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv7s\" (UniqueName: \"kubernetes.io/projected/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-kube-api-access-pkv7s\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352314 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352357 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-apiservice-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.353069 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.85303615 +0000 UTC m=+122.128533150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365095 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wbv6f"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365190 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerStarted","Data":"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" event={"ID":"8eb5f446-9d16-4ceb-9bb7-9424862cac0b","Type":"ContainerStarted","Data":"622cac008e6f344601da7814328d32bf4251e371ecb3f167f409d3931a5c0323"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365536 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365557 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.366799 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.376706 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.411823 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.436558 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.446348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-srv-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.450579 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.453741 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-tmpfs\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454101 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jjj\" (UniqueName: \"kubernetes.io/projected/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-kube-api-access-22jjj\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-apiservice-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454241 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-tmpfs\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454321 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454349 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkfm\" (UniqueName: \"kubernetes.io/projected/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-kube-api-access-pdkfm\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.454892 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.954868317 +0000 UTC m=+122.230365247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.456410 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-tmpfs\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454474 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-key\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457526 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-webhook-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457623 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-srv-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457659 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-cabundle\") pod 
\"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pkv7s\" (UniqueName: \"kubernetes.io/projected/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-kube-api-access-pkv7s\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.470154 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.513597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.526844 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz7x\" (UniqueName: \"kubernetes.io/projected/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-kube-api-access-jqz7x\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:02 crc kubenswrapper[5108]: W0202 00:12:02.533068 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1b2e108_2c25_4942_b6bb_9bd186134bc9.slice/crio-b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57 WatchSource:0}: Error finding container b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57: Status 404 returned error can't find the container with id b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57 Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.547734 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" event={"ID":"688cb527-1d6f-4e22-9b14-4718201c8343","Type":"ContainerStarted","Data":"e012f07d508f60af46efab18b336a6bf44e36c3b7a37cecd5f8ff132f8f02b90"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.547797 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.547983 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.553745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swg6f\" (UniqueName: \"kubernetes.io/projected/07d89198-8b8e-4edc-96b8-05b6df5194f6-kube-api-access-swg6f\") pod \"downloads-747b44746d-cp5z2\" (UID: \"07d89198-8b8e-4edc-96b8-05b6df5194f6\") " pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.558848 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.558990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkfm\" (UniqueName: \"kubernetes.io/projected/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-kube-api-access-pdkfm\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559142 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-key\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-srv-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559422 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-cabundle\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559551 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-tmpfs\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559641 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22jjj\" (UniqueName: \"kubernetes.io/projected/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-kube-api-access-22jjj\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559724 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.560097 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.060080453 +0000 UTC m=+122.335577383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.560988 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-tmpfs\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.562942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.565886 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.571995 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.580954 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.590547 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.596160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-images\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601043 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-znc99" event={"ID":"dace4fd5-2d12-4c11-8252-9ac7426f870b","Type":"ContainerStarted","Data":"1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601101 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-9pw49"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601121 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601164 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" event={"ID":"29780476-3e92-4559-af84-e97ab8689687","Type":"ContainerStarted","Data":"edf2f5ae7b656f989a8d79219fb5cd964cf185d1dcb11ba1176c4c4a69ef2c39"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601178 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601193 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q88tw"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601206 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hnl48"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.602665 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.610241 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.628746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.630962 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.644623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.653857 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.664872 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.665120 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.165095013 +0000 UTC m=+122.440591943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.665554 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.666423 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gscz\" (UniqueName: \"kubernetes.io/projected/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-kube-api-access-7gscz\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.666640 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.667042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-serving-cert\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.667167 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.167148908 +0000 UTC m=+122.442645838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.669636 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.678721 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-config\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.684921 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" event={"ID":"1f2e75fc-5a21-4f73-8f4c-050eb27c0601","Type":"ContainerStarted","Data":"5d866278182645a4b04b27cd412a4f630b1f2a02a19cbdf9183778c0f02dc03b"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.684965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerStarted","Data":"662689ee61fccec648a90a4375a519042cf1cb9c27ef807a261aa5cd1d207f99"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.684998 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" event={"ID":"79d485c3-4de5-4d03-adf4-56f546c56674","Type":"ContainerStarted","Data":"8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.685018 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-824d7"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.685030 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.711970 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.730834 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.731623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwdjm\" (UniqueName: \"kubernetes.io/projected/031f8213-ba02-4add-9d14-c3a995a10fa9-kube-api-access-bwdjm\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.739427 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-service-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.766798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.767211 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.767353 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.267328571 +0000 UTC m=+122.542825501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.767944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7gscz\" (UniqueName: \"kubernetes.io/projected/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-kube-api-access-7gscz\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768004 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mg25\" (UniqueName: \"kubernetes.io/projected/917a1c8b-59d5-4acb-8cef-91979326a7d1-kube-api-access-2mg25\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768027 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-plugins-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-socket-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-csi-data-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768252 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88c0487-caa2-44ee-a139-33b289b9fc2d-serving-cert\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768696 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768731 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-registration-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.768863 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.26882702 +0000 UTC m=+122.544323950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.769049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-mountpoint-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.769128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdcw\" (UniqueName: \"kubernetes.io/projected/e88c0487-caa2-44ee-a139-33b289b9fc2d-kube-api-access-vsdcw\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.772297 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.772784 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.787009 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nz59\" (UniqueName: \"kubernetes.io/projected/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-kube-api-access-4nz59\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.788620 5108 generic.go:358] "Generic (PLEG): container finished" podID="8490096f-f230-4160-bb09-338c9fa9f7ca" containerID="806cbf335f4c9122a98af00277e8275b9c9c56fd35ff77e9c13a5c09fad858b6" exitCode=0 Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.796010 5108 generic.go:358] "Generic (PLEG): container finished" podID="8eb5f446-9d16-4ceb-9bb7-9424862cac0b" containerID="4c6e7884627b6708f6b36fa0a5fd9c8c47024a9108bb856e7749da000b38a18d" exitCode=0 Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804592 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerStarted","Data":"ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804659 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" event={"ID":"d2203371-fbdd-4110-9b33-39f278fbaa0d","Type":"ContainerStarted","Data":"92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804680 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-znc99"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804711 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" event={"ID":"d7088c96-1022-40ff-a06c-f6c299744e3a","Type":"ContainerStarted","Data":"2ad20847710da3126f76cc87d6b9148544302a9e5e4ae90647a3e99524987c69"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804728 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804747 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804760 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cvtnf"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-9pw49" event={"ID":"6d992c02-f6cc-4488-9108-a72c6c2f3dcf","Type":"ContainerStarted","Data":"667462f5842f9336d060c680487d82e541368124a4626d982b5aaa54ddf6a9f0"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804792 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" event={"ID":"79d485c3-4de5-4d03-adf4-56f546c56674","Type":"ContainerStarted","Data":"9cd7b43085215338e0f3618f1075735e2c21684fc535b52c243eb7c5d342543a"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804809 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-cp5z2"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804829 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804844 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" event={"ID":"64332d15-ee3f-4864-9165-3217a06b24c2","Type":"ContainerStarted","Data":"8f2bc8a0b6e698f037e9383e20c6e4ee4f255ad3fc27bbd9bf4b9c0f9172e8f9"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804867 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804883 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" event={"ID":"d2203371-fbdd-4110-9b33-39f278fbaa0d","Type":"ContainerStarted","Data":"46d3e656b986b28a6c3ed6dd7019d7791902fca89c304d6aeaad28f1500fe047"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804896 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29499840-njc6g"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804911 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804930 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" event={"ID":"e1b2e108-2c25-4942-b6bb-9bd186134bc9","Type":"ContainerStarted","Data":"b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804948 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804966 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804981 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" event={"ID":"59650315-e011-493f-bbf9-c20555ea6025","Type":"ContainerStarted","Data":"3ee763e0c64f20bee57b387b2a75d1c42b8796be4c633ce066b463b9e2251fcc"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804995 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" 
event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerDied","Data":"806cbf335f4c9122a98af00277e8275b9c9c56fd35ff77e9c13a5c09fad858b6"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805033 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" event={"ID":"d7088c96-1022-40ff-a06c-f6c299744e3a","Type":"ContainerStarted","Data":"930c0fc731362b74d47c1d69f55db286e0ea2297614d996d109a47a45e26cbeb"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805049 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" event={"ID":"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26","Type":"ContainerStarted","Data":"085eaeea2f3b71f73a742a92beea7fc7c5c168d52b65f8e21625d1f7a0060537"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805088 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-9pw49" event={"ID":"6d992c02-f6cc-4488-9108-a72c6c2f3dcf","Type":"ContainerStarted","Data":"961554ebe1f6274cf27a8fe1773f7ae08ab641c306f9331db0a1ce83fcb584c2"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805101 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" event={"ID":"8eb5f446-9d16-4ceb-9bb7-9424862cac0b","Type":"ContainerDied","Data":"4c6e7884627b6708f6b36fa0a5fd9c8c47024a9108bb856e7749da000b38a18d"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805119 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-znc99" event={"ID":"dace4fd5-2d12-4c11-8252-9ac7426f870b","Type":"ContainerStarted","Data":"9a1c9821ac905b46c5b43b356e28319733ccb1106d884e83b7c61377841bc40b"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805134 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805150 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" event={"ID":"29780476-3e92-4559-af84-e97ab8689687","Type":"ContainerStarted","Data":"813601af7cf995b6fb2d0282609c818f051a53b8b25c7f974a7794a72d578fb2"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805163 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805179 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805193 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q9bzk"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805638 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.808845 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7nm\" (UniqueName: \"kubernetes.io/projected/4c22e3c9-f940-436c-bcd4-0ae77d143061-kube-api-access-5c7nm\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.830181 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.831746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.832774 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.869599 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870094 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.870376 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.370341218 +0000 UTC m=+122.645838318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870377 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9tm\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-kube-api-access-hm9tm\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870922 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-socket-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870963 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-csi-data-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871008 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88c0487-caa2-44ee-a139-33b289b9fc2d-serving-cert\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871101 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-registration-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871356 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-mountpoint-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871388 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdcw\" (UniqueName: \"kubernetes.io/projected/e88c0487-caa2-44ee-a139-33b289b9fc2d-kube-api-access-vsdcw\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871424 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mg25\" (UniqueName: \"kubernetes.io/projected/917a1c8b-59d5-4acb-8cef-91979326a7d1-kube-api-access-2mg25\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-plugins-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871887 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-plugins-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871977 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-socket-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.872050 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-csi-data-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.872525 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.372516866 +0000 UTC m=+122.648013796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.872629 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-registration-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.872674 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-mountpoint-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.886499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmvq\" (UniqueName: \"kubernetes.io/projected/fde8d9df-2e55-498d-acbe-7b5396cac5a7-kube-api-access-qjmvq\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.893781 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.906766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525b7b06-ae33-4a3b-bf12-139bff69a17c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.917100 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.921890 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-96tjr"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.922933 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.938720 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zr6\" (UniqueName: \"kubernetes.io/projected/99916b4a-423b-4db6-a912-cc2ef585eab3-kube-api-access-z5zr6\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.956848 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.967843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-client\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.973533 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.974056 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnpq\" (UniqueName: \"kubernetes.io/projected/ec9d7fc9-2385-408d-87f0-f2efafa41865-kube-api-access-vbnpq\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.974143 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-certs\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.974273 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-node-bootstrap-token\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.975109 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.475081302 +0000 UTC m=+122.750578222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.991651 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:02.997523 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdgn\" (UniqueName: \"kubernetes.io/projected/97af9c02-0ff8-4146-9313-f3ecc17e1faa-kube-api-access-8hdgn\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:02.998818 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-apiservice-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.002445 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-webhook-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.002681 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.011511 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.018729 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.034233 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.034385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-srv-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.044788 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.044970 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-xtqwv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.045032 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.045123 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fc5pz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.045244 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.055807 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-znc99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.055884 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-znc99" podUID="dace4fd5-2d12-4c11-8252-9ac7426f870b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.056715 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.062525 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.063526 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-key\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.070362 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.071358 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-cabundle\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f864fdce-3b6b-4ba2-9159-12c2d21f2601-metrics-tls\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075862 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbnpq\" (UniqueName: \"kubernetes.io/projected/ec9d7fc9-2385-408d-87f0-f2efafa41865-kube-api-access-vbnpq\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075888 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-certs\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075911 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f864fdce-3b6b-4ba2-9159-12c2d21f2601-tmp-dir\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075950 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-node-bootstrap-token\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075994 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f864fdce-3b6b-4ba2-9159-12c2d21f2601-config-volume\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.076022 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.076093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xv2\" (UniqueName: \"kubernetes.io/projected/f864fdce-3b6b-4ba2-9159-12c2d21f2601-kube-api-access-24xv2\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.076615 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.57660146 +0000 UTC m=+122.852098390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.090624 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.111913 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: W0202 00:12:03.118948 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8285a46b_171e_4c8c_ba54_5ab062df76fc.slice/crio-791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d WatchSource:0}: Error finding container 791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d: Status 404 returned error can't find the container with id 791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.152246 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.165628 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.169853 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.172556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkv7s\" (UniqueName: \"kubernetes.io/projected/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-kube-api-access-pkv7s\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178236 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178507 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f864fdce-3b6b-4ba2-9159-12c2d21f2601-metrics-tls\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178683 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f864fdce-3b6b-4ba2-9159-12c2d21f2601-tmp-dir\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178875 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f864fdce-3b6b-4ba2-9159-12c2d21f2601-config-volume\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.179080 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24xv2\" (UniqueName: \"kubernetes.io/projected/f864fdce-3b6b-4ba2-9159-12c2d21f2601-kube-api-access-24xv2\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.180627 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.680599134 +0000 UTC m=+122.956096064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.183531 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f864fdce-3b6b-4ba2-9159-12c2d21f2601-tmp-dir\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.184655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.207451 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkfm\" (UniqueName: \"kubernetes.io/projected/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-kube-api-access-pdkfm\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.230887 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.231309 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jjj\" (UniqueName: \"kubernetes.io/projected/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-kube-api-access-22jjj\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.235547 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88c0487-caa2-44ee-a139-33b289b9fc2d-serving-cert\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.250816 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.256221 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.271538 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.280812 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.281591 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.781565428 +0000 UTC m=+123.057062358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.283243 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284769 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284819 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" event={"ID":"1f2e75fc-5a21-4f73-8f4c-050eb27c0601","Type":"ContainerStarted","Data":"8b65ab51da705077a5b8b44a4f073f7d26a5c0631e765f8986cab314207c4b66"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284855 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284870 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284887 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284899 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerStarted","Data":"c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284911 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" event={"ID":"74feb297-18d1-4e3a-b077-779e202c89da","Type":"ContainerStarted","Data":"e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284923 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284936 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-4zcv5"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284947 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284958 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284970 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285472 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285645 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285693 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285707 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wb8mw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285735 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285755 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285770 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285786 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hnl48"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285843 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fn572"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285860 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285902 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.287551 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.312479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.314770 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-4zcv5"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.344469 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.351542 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.381621 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.387586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.387988 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.887944385 +0000 UTC m=+123.163441315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.388250 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66ac186f-bc25-4f39-9d7b-394d9683b5c4-cert\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.388310 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzpm\" (UniqueName: \"kubernetes.io/projected/66ac186f-bc25-4f39-9d7b-394d9683b5c4-kube-api-access-phzpm\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.388536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.389022 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.889015274 +0000 UTC m=+123.164512204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.393696 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.408812 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
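
The MountVolume.MountDevice and UnmountVolume.TearDown failures above (and repeated below) share one root cause: the kubelet's CSI plugin registry has no entry yet for kubevirt.io.hostpath-provisioner, because the node plugin pod hostpath-provisioner/csi-hostpathplugin-hnl48 is itself still being set up in this same log window. Until the plugin announces itself over the kubelet's plugin-registration socket, every mount and unmount of pvc-b21f41aa-... fails fast and is requeued. A minimal Go sketch of that lookup-then-fail shape, assuming illustrative names (registeredDrivers, newCSIClient, and the socket path are invented here, not the kubelet's actual code):

    package main

    import (
        "fmt"
        "sync"
    )

    // registeredDrivers mimics the kubelet's in-memory CSI driver registry,
    // populated when a node plugin announces itself on the kubelet's
    // plugin-registration socket.
    var (
        mu                sync.RWMutex
        registeredDrivers = map[string]string{} // driver name -> plugin socket path
    )

    // newCSIClient fails the same way the log does while the driver is absent;
    // callers requeue the volume operation and retry after a delay.
    func newCSIClient(name string) (string, error) {
        mu.RLock()
        defer mu.RUnlock()
        sock, ok := registeredDrivers[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return sock, nil
    }

    func main() {
        if _, err := newCSIClient("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("MountVolume.MountDevice failed:", err) // retried after durationBeforeRetry
        }
        // Once the plugin registers (socket path hypothetical), the lookup succeeds.
        mu.Lock()
        registeredDrivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
        mu.Unlock()
        if sock, err := newCSIClient("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("driver registered at", sock)
        }
    }

Once csi-hostpathplugin-hnl48 finishes starting and registers the driver, the same lookup succeeds and the pending image-registry mount can go through.
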
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.442029 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.442344 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gscz\" (UniqueName: \"kubernetes.io/projected/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-kube-api-access-7gscz\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.448694 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.448837 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.450254 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.460556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-certs\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.475408 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.475427 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.481029 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-node-bootstrap-token\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.493116 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.493876 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66ac186f-bc25-4f39-9d7b-394d9683b5c4-cert\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.493913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phzpm\" (UniqueName: \"kubernetes.io/projected/66ac186f-bc25-4f39-9d7b-394d9683b5c4-kube-api-access-phzpm\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.494058 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.994024784 +0000 UTC m=+123.269521704 (durationBeforeRetry 500ms). 
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.506494 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.530305 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mg25\" (UniqueName: \"kubernetes.io/projected/917a1c8b-59d5-4acb-8cef-91979326a7d1-kube-api-access-2mg25\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:03 crc kubenswrapper[5108]: W0202 00:12:03.544481 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b7b06_ae33_4a3b_bf12_139bff69a17c.slice/crio-6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e WatchSource:0}: Error finding container 6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e: Status 404 returned error can't find the container with id 6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.550695 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdcw\" (UniqueName: \"kubernetes.io/projected/e88c0487-caa2-44ee-a139-33b289b9fc2d-kube-api-access-vsdcw\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.553935 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.575527 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.592042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f864fdce-3b6b-4ba2-9159-12c2d21f2601-metrics-tls\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.597836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.602295 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.602626 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.10261328 +0000 UTC m=+123.378110210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.611572 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.627404 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.628533 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651293 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-96tjr"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651754 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q9bzk"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651810 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29499840-njc6g"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651823 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651834 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651844 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wbv6f"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651853 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q88tw"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651862 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fn572"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651871 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651880 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651890 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-9pw49"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651899 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"]
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651910 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"]
"SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651919 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651928 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-znc99"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f864fdce-3b6b-4ba2-9159-12c2d21f2601-config-volume\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.658861 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbnpq\" (UniqueName: \"kubernetes.io/projected/ec9d7fc9-2385-408d-87f0-f2efafa41865-kube-api-access-vbnpq\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.662870 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.662983 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cvtnf"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663020 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663034 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663047 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663061 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663100 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663122 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663135 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663148 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663179 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663195 5108 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663208 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663219 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wb8mw"] Feb 02 00:12:03 crc kubenswrapper[5108]: W0202 00:12:03.689255 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99916b4a_423b_4db6_a912_cc2ef585eab3.slice/crio-29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9 WatchSource:0}: Error finding container 29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9: Status 404 returned error can't find the container with id 29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9 Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.695709 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.697483 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24xv2\" (UniqueName: \"kubernetes.io/projected/f864fdce-3b6b-4ba2-9159-12c2d21f2601-kube-api-access-24xv2\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.698325 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.709694 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.709906 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.209866699 +0000 UTC m=+123.485363629 (durationBeforeRetry 500ms). 
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.710044 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.710608 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.711057 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.211049241 +0000 UTC m=+123.486546171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.728660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-824d7"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.741752 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66ac186f-bc25-4f39-9d7b-394d9683b5c4-cert\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.756783 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.757057 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.790341 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phzpm\" (UniqueName: \"kubernetes.io/projected/66ac186f-bc25-4f39-9d7b-394d9683b5c4-kube-api-access-phzpm\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.791370 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.802193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnj69\" (UniqueName: \"kubernetes.io/projected/45594040-ee30-4578-aa8c-a9e8ef858c06-kube-api-access-lnj69\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.811758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812439 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812700 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812773 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812802 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.812956 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.312935389 +0000 UTC m=+123.588432319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.818891 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.832799 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" event={"ID":"00c9b96f-70c1-47b2-ab2f-570c9911ecaf","Type":"ContainerStarted","Data":"b0bd1b187bbbb754f27cfef12a7d5f1cbe1ee9daf4aa8ec0180b3caefdcfff4b"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.834054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" event={"ID":"031f8213-ba02-4add-9d14-c3a995a10fa9","Type":"ContainerStarted","Data":"ad774d57500bb9e0fc53f27ff35acb3a77561017af7111c7a796200ffd8f6057"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.840692 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerStarted","Data":"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.842241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" event={"ID":"59650315-e011-493f-bbf9-c20555ea6025","Type":"ContainerStarted","Data":"dd8c1237f4b0cfcc2014cd3f28fdafcb2c7160092996df3277435c1949c25268"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.843872 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" event={"ID":"97af9c02-0ff8-4146-9313-f3ecc17e1faa","Type":"ContainerStarted","Data":"62c614918ea1ed767fd2378cc41eb8537204d35f7925c249c531c0a38e787b9c"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.845269 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" 
event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerStarted","Data":"791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.845660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.847926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" event={"ID":"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26","Type":"ContainerStarted","Data":"b1ce024c5139d6ed5da0f595f77dab589e4936242aebf05079321a106b535522"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.850944 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" event={"ID":"99916b4a-423b-4db6-a912-cc2ef585eab3","Type":"ContainerStarted","Data":"29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.853254 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" event={"ID":"4c22e3c9-f940-436c-bcd4-0ae77d143061","Type":"ContainerStarted","Data":"da6479b86cb53a1cf69d2886a6f1e2e95b22fffbe0a1c7f6a8a87775b99f4e8f"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.854273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" event={"ID":"9b79d203-f1c7-4523-9d97-51181cdb26d2","Type":"ContainerStarted","Data":"65f94510318c4561e69d3f97ae53f9b1e6bbb466ebed5d4c3b077af1ba4d4a03"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.856134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" event={"ID":"525b7b06-ae33-4a3b-bf12-139bff69a17c","Type":"ContainerStarted","Data":"6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.857222 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" event={"ID":"fde8d9df-2e55-498d-acbe-7b5396cac5a7","Type":"ContainerStarted","Data":"f12daf5a7b7ac4781a26b5f15ef59738c0f6b8cdc640c762e6bd96095474a7a0"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.859529 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerStarted","Data":"b13ed7e02312952627a8fe290f3f42545cea89e59d6401fe8e6ee3b38f6bedcd"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.864414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" event={"ID":"688cb527-1d6f-4e22-9b14-4718201c8343","Type":"ContainerStarted","Data":"97a2863c8e5866afb11a484a683b6301f14173c4c8442a743c64cb4d5adb897a"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.867133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerStarted","Data":"27aadd57983610ac0f185271929402ea50f3644923e9bd626982607ee695c627"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.869799 5108 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.907025 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915534 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915575 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915606 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915688 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.916841 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.41682531 +0000 UTC m=+123.692322240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.917125 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.917361 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.917748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.932151 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.937799 5108 util.go:30] "No sandbox for pod can be found. 
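Every failed volume operation in this log is parked with "No retries permitted until <now+500ms> (durationBeforeRetry 500ms)", which is why the same MountDevice/TearDown pair recurs roughly twice a second: the volume reconciler simply re-attempts on its next pass once that deadline has lapsed. A Go sketch of the deadline-gated retry visible here (pendingOp is an illustrative stand-in for the kubelet's nested pending operations bookkeeping, which can also grow the delay under a backoff policy):

    package main

    import (
        "fmt"
        "time"
    )

    const durationBeforeRetry = 500 * time.Millisecond

    // pendingOp gates retries the way the nestedpendingoperations entries do:
    // after a failure, no retry is permitted until now + durationBeforeRetry.
    type pendingOp struct {
        notBefore time.Time
    }

    func (p *pendingOp) fail(now time.Time) {
        p.notBefore = now.Add(durationBeforeRetry)
        fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s)\n",
            p.notBefore.Format("2006-01-02 15:04:05.000000000 -0700 MST"), durationBeforeRetry)
    }

    func (p *pendingOp) mayRetry(now time.Time) bool {
        return !now.Before(p.notBefore)
    }

    func main() {
        var op pendingOp
        op.fail(time.Now())
        fmt.Println("retry allowed immediately?", op.mayRetry(time.Now())) // false
        time.Sleep(durationBeforeRetry)
        fmt.Println("retry allowed after 500ms?", op.mayRetry(time.Now())) // true
    }
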
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953280 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-xtqwv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953359 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953726 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-znc99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953776 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-znc99" podUID="dace4fd5-2d12-4c11-8252-9ac7426f870b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.954223 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.956412 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.969810 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-4lq2m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.969873 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.973847 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.023573 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.024857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.025492 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.525466647 +0000 UTC m=+123.800963577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.026631 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.029068 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.529052222 +0000 UTC m=+123.804549152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.099512 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-cp5z2"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.129338 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.129594 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.629551063 +0000 UTC m=+123.905047993 (durationBeforeRetry 500ms). 
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.129860 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.131165 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.630306283 +0000 UTC m=+123.905803213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.149602 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"]
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.182170 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q9bzk"]
Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.222917 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf864fdce_3b6b_4ba2_9159_12c2d21f2601.slice/crio-7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573 WatchSource:0}: Error finding container 7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573: Status 404 returned error can't find the container with id 7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.230589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.231120 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.731100992 +0000 UTC m=+124.006597922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.231691 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-4zcv5"]
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.246671 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.253871 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"]
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.257394 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hnl48"]
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.258900 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fc5pz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.258956 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.295222 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917a1c8b_59d5_4acb_8cef_91979326a7d1.slice/crio-b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11 WatchSource:0}: Error finding container b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11: Status 404 returned error can't find the container with id b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.315573 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-96tjr"]
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.328348 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"]
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.333762 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.334194 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.834175711 +0000 UTC m=+124.109672641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.352995 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-znc99" podStartSLOduration=100.352965289 podStartE2EDuration="1m40.352965289s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.350054102 +0000 UTC m=+123.625551052" watchObservedRunningTime="2026-02-02 00:12:04.352965289 +0000 UTC m=+123.628462219"
Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.355007 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45594040_ee30_4578_aa8c_a9e8ef858c06.slice/crio-8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e WatchSource:0}: Error finding container 8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e: Status 404 returned error can't find the container with id 8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e
Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.373066 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66ac186f_bc25_4f39_9d7b_394d9683b5c4.slice/crio-d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81 WatchSource:0}: Error finding container d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81: Status 404 returned error can't find the container with id d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.383925 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" podStartSLOduration=100.383902929 podStartE2EDuration="1m40.383902929s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.382494251 +0000 UTC m=+123.657991191" watchObservedRunningTime="2026-02-02 00:12:04.383902929 +0000 UTC m=+123.659399859"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.436173 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.438031 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.93798765 +0000 UTC m=+124.213484700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
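The podStartSLOduration values above are direct subtraction: these pods were created at 00:10:24 and observed running around 00:12:04, i.e. roughly 100s, and the zeroed firstStartedPulling/lastFinishedPulling timestamps indicate no image pull time was excluded (the images were already on the node). A quick check of the console-operator figure:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the console-operator entry above.
        created, _ := time.Parse(time.RFC3339, "2026-02-02T00:10:24Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-02-02T00:12:04.350054102Z")
        // Prints 1m40.350054102s; the logged podStartSLOduration=100.352965289
        // differs only by the few milliseconds at which it was computed.
        fmt.Println("startup duration:", running.Sub(created))
    }
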
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.470160 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-9pw49" podStartSLOduration=100.469992018 podStartE2EDuration="1m40.469992018s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.424957255 +0000 UTC m=+123.700454195" watchObservedRunningTime="2026-02-02 00:12:04.469992018 +0000 UTC m=+123.745488948"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.470829 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" podStartSLOduration=100.47082236 podStartE2EDuration="1m40.47082236s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.46668086 +0000 UTC m=+123.742177810" watchObservedRunningTime="2026-02-02 00:12:04.47082236 +0000 UTC m=+123.746319310"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.538600 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.539123 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.039101568 +0000 UTC m=+124.314598498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.602267 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" podStartSLOduration=100.60221666 podStartE2EDuration="1m40.60221666s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.600182755 +0000 UTC m=+123.875679705" watchObservedRunningTime="2026-02-02 00:12:04.60221666 +0000 UTC m=+123.877713590"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.642422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.643513 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.143490812 +0000 UTC m=+124.418987732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.669453 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podStartSLOduration=100.669424579 podStartE2EDuration="1m40.669424579s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.669367428 +0000 UTC m=+123.944864368" watchObservedRunningTime="2026-02-02 00:12:04.669424579 +0000 UTC m=+123.944921509"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.671614 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" podStartSLOduration=100.671603996 podStartE2EDuration="1m40.671603996s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.643809491 +0000 UTC m=+123.919306421" watchObservedRunningTime="2026-02-02 00:12:04.671603996 +0000 UTC m=+123.947100926"
Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.745280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.746095 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.246075519 +0000 UTC m=+124.521572449 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.847853 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.848284 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.348259315 +0000 UTC m=+124.623756255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.952280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.952900 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.452885125 +0000 UTC m=+124.728382055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.967437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" event={"ID":"031f8213-ba02-4add-9d14-c3a995a10fa9","Type":"ContainerStarted","Data":"741a5e5e4e18e911dfdf2b5e5840f16e6e43ba4ef72fb2c29fc2eb7ff1366738"} Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.975704 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11"} Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.986508 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-824d7" event={"ID":"ec9d7fc9-2385-408d-87f0-f2efafa41865","Type":"ContainerStarted","Data":"78c58d4a935e815abdd2e20984f52c1eaa78f43fae88002c7ebb39a86e404bae"} Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.999122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" event={"ID":"e1b2e108-2c25-4942-b6bb-9bd186134bc9","Type":"ContainerStarted","Data":"1450de438627cfa7f452b819a62d30550c58c5bf1ace61b9ff8d1a16c6e3b0fd"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.000003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerStarted","Data":"b7ccd63409a2599caa2a1d6a430c1e67af5f138dd3ea1e54d57df99b1d6cd73a"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.045789 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" event={"ID":"8eb5f446-9d16-4ceb-9bb7-9424862cac0b","Type":"ContainerStarted","Data":"62d317867d108f124247eb8b10471272b2750ecc456cabdbefea82582a812a80"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.050268 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" event={"ID":"45594040-ee30-4578-aa8c-a9e8ef858c06","Type":"ContainerStarted","Data":"8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.055125 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.056001 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:05.555967335 +0000 UTC m=+124.831464265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.078802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" event={"ID":"74feb297-18d1-4e3a-b077-779e202c89da","Type":"ContainerStarted","Data":"0d746d8307495c32c04d459e3e2b91eee5fe17d31030d4b3c91e36e38c6c3719"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.093456 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerStarted","Data":"7a39e1408001c53856587460b4d183f2cf618151452c8a7a0807f54727156f95"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.135967 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" event={"ID":"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc","Type":"ContainerStarted","Data":"1aa404549b640839622a136e00b6e6737a73ccef583bff3181d7596c2ec8172a"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.139553 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" event={"ID":"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea","Type":"ContainerStarted","Data":"a690ca3b87e2acc517b911a3d4d89655c668e5f49e99639f67dc29c1433087c2"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.141782 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podStartSLOduration=101.141765576 podStartE2EDuration="1m41.141765576s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.117454783 +0000 UTC m=+124.392951723" watchObservedRunningTime="2026-02-02 00:12:05.141765576 +0000 UTC m=+124.417262506" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.147249 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmvtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.147324 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.147739 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.157508 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.158125 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.658100709 +0000 UTC m=+124.933597639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.177786 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" event={"ID":"64332d15-ee3f-4864-9165-3217a06b24c2","Type":"ContainerStarted","Data":"7b205ac45f41d5119940bc7240d1e8443f3a97c3700d1a88f8136ad1ebb839b9"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.247555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerStarted","Data":"f55a73b195fc4ff73f7a158b317a4f091e335545c9c7fff202d86972324de8ba"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.259338 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerStarted","Data":"35313905bb44ab9622887349e6e479da86c5011d92c1de20652791877e17021c"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.261896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.262199 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.762125154 +0000 UTC m=+125.037622084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.262741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.265748 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.765714489 +0000 UTC m=+125.041211419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.285806 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" event={"ID":"d7088c96-1022-40ff-a06c-f6c299744e3a","Type":"ContainerStarted","Data":"dde296639123c62a01bda198e41a2bd13f137ade7edb20b694d143a8922fecc1"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.288550 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" event={"ID":"e88c0487-caa2-44ee-a139-33b289b9fc2d","Type":"ContainerStarted","Data":"438263aeffcc2b8c337156661ccdd1797999eed07f6abc5078bb6dbb25881e45"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.290988 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" event={"ID":"9b79d203-f1c7-4523-9d97-51181cdb26d2","Type":"ContainerStarted","Data":"a91f2539c3eeb2902b7397333b66a15d7ebbbfe0ac5d8d5309bc7b7fcfb4537b"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.292688 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9bzk" event={"ID":"f864fdce-3b6b-4ba2-9159-12c2d21f2601","Type":"ContainerStarted","Data":"7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.295726 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" event={"ID":"2c1108f2-209c-4d4c-affc-fe8fbfd27cca","Type":"ContainerStarted","Data":"157a6d6b4750cd8ba0d89b89e59f900aebf3db15d8fccfe93a51655962608c6d"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.298342 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" event={"ID":"27d783b3-6f7d-4f4d-b054-225bfcb98fd5","Type":"ContainerStarted","Data":"5911c7cd1065babf88a5cddc507d2c4086da750449615ffbe4c0743188f2ef3a"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.300336 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29499840-njc6g" podStartSLOduration=101.300322486 podStartE2EDuration="1m41.300322486s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.299702059 +0000 UTC m=+124.575198999" watchObservedRunningTime="2026-02-02 00:12:05.300322486 +0000 UTC m=+124.575819416" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.304868 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46268: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.328723 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-96tjr" event={"ID":"66ac186f-bc25-4f39-9d7b-394d9683b5c4","Type":"ContainerStarted","Data":"d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.332922 5108 generic.go:358] "Generic (PLEG): container finished" podID="2b96d2a0-be27-428e-8bfd-f78a09feb756" containerID="27aadd57983610ac0f185271929402ea50f3644923e9bd626982607ee695c627" exitCode=0 Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.333021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerDied","Data":"27aadd57983610ac0f185271929402ea50f3644923e9bd626982607ee695c627"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.336022 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-4zcv5" event={"ID":"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2","Type":"ContainerStarted","Data":"c7422d62d76a89e9d61974b53e17891502d757d0a2000d16d9c1867ba87f128f"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.344027 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" podStartSLOduration=101.344003323 podStartE2EDuration="1m41.344003323s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.342848981 +0000 UTC m=+124.618345931" watchObservedRunningTime="2026-02-02 00:12:05.344003323 +0000 UTC m=+124.619500263" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.372140 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.374424 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:05.874398757 +0000 UTC m=+125.149895697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.382941 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46274: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.474511 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.476801 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.976779448 +0000 UTC m=+125.252276378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.477936 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46278: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.544095 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-4lq2m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.544541 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.564100 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=45.56407492 podStartE2EDuration="45.56407492s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.563243908 +0000 UTC m=+124.838740848" watchObservedRunningTime="2026-02-02 00:12:05.56407492 +0000 UTC m=+124.839571840" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 
00:12:05.575857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.579693 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.079664733 +0000 UTC m=+125.355161663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.587980 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46282: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.611571 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.617243 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podStartSLOduration=101.617204707 podStartE2EDuration="1m41.617204707s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.608590939 +0000 UTC m=+124.884087879" watchObservedRunningTime="2026-02-02 00:12:05.617204707 +0000 UTC m=+124.892701637" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.681510 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46294: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.683033 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.683493 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.183476352 +0000 UTC m=+125.458973282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.707912 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-znc99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.707986 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-znc99" podUID="dace4fd5-2d12-4c11-8252-9ac7426f870b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.784059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.784533 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.284496917 +0000 UTC m=+125.559993847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.784895 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.787016 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.286994383 +0000 UTC m=+125.562491443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.803895 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46298: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.806646 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" podStartSLOduration=101.806620103 podStartE2EDuration="1m41.806620103s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.753451715 +0000 UTC m=+125.028948655" watchObservedRunningTime="2026-02-02 00:12:05.806620103 +0000 UTC m=+125.082117033" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.807370 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" podStartSLOduration=101.807362782 podStartE2EDuration="1m41.807362782s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.80313611 +0000 UTC m=+125.078633050" watchObservedRunningTime="2026-02-02 00:12:05.807362782 +0000 UTC m=+125.082859712" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.827664 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" podStartSLOduration=101.827647539 podStartE2EDuration="1m41.827647539s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.825677367 +0000 UTC m=+125.101174317" watchObservedRunningTime="2026-02-02 00:12:05.827647539 +0000 UTC m=+125.103144469" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.870209 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.870738 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.874218 5108 patch_prober.go:28] interesting pod/apiserver-8596bd845d-fn572 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.40:8443/livez\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.874288 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" podUID="8eb5f446-9d16-4ceb-9bb7-9424862cac0b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.40:8443/livez\": dial tcp 10.217.0.40:8443: connect: connection 
refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.886780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.887023 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.38697835 +0000 UTC m=+125.662475280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.888022 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.894830 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.895763 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.395742892 +0000 UTC m=+125.671239822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.901741 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.901826 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.901899 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.907471 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podStartSLOduration=101.907449823 podStartE2EDuration="1m41.907449823s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.869723303 +0000 UTC m=+125.145220253" watchObservedRunningTime="2026-02-02 00:12:05.907449823 +0000 UTC m=+125.182946743" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.947071 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" podStartSLOduration=101.947050451 podStartE2EDuration="1m41.947050451s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.909370453 +0000 UTC m=+125.184867393" watchObservedRunningTime="2026-02-02 00:12:05.947050451 +0000 UTC m=+125.222547381" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.989833 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.990083 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" podStartSLOduration=101.99006325 podStartE2EDuration="1m41.99006325s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.949971449 +0000 UTC m=+125.225468389" watchObservedRunningTime="2026-02-02 00:12:05.99006325 
+0000 UTC m=+125.265560180" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.990581 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.490559233 +0000 UTC m=+125.766056163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.991027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.991530 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.491522149 +0000 UTC m=+125.767019079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.026774 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46312: no serving certificate available for the kubelet" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.028907 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podStartSLOduration=102.028877878 podStartE2EDuration="1m42.028877878s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.994697883 +0000 UTC m=+125.270194813" watchObservedRunningTime="2026-02-02 00:12:06.028877878 +0000 UTC m=+125.304374808" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.092396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.092838 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.592813611 +0000 UTC m=+125.868310541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.194243 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.194713 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.694692209 +0000 UTC m=+125.970189139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.296566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.296877 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.796841964 +0000 UTC m=+126.072338894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.297258 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.297659 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.797639795 +0000 UTC m=+126.073136795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.386666 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-96tjr" event={"ID":"66ac186f-bc25-4f39-9d7b-394d9683b5c4","Type":"ContainerStarted","Data":"710f09f87b57c061ef933ae0ed00cf0c1ff29fc614b75e2305f43b0293a4e770"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.401262 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.401439 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.901412933 +0000 UTC m=+126.176909863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.403512 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.404423 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.904413102 +0000 UTC m=+126.179910032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.415542 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46326: no serving certificate available for the kubelet" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.462299 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerStarted","Data":"0ab98fc00cb1c3500402e04faf4806bbbff16e8d22e3529c3633a861ce522222"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.463359 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.484347 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-4zcv5" event={"ID":"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2","Type":"ContainerStarted","Data":"72c18e1195a5618901ea2deb8ab5d9bb93c1bf64d972a0f52ea04a01a867f558"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.498704 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" podStartSLOduration=102.498680519 podStartE2EDuration="1m42.498680519s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.496973293 +0000 UTC m=+125.772470253" watchObservedRunningTime="2026-02-02 00:12:06.498680519 +0000 UTC m=+125.774177449" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.502151 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-824d7" event={"ID":"ec9d7fc9-2385-408d-87f0-f2efafa41865","Type":"ContainerStarted","Data":"b7f60eaf52b9ef737f604bec27ca8c5d4bbceeb1fdea44d068cd1fb672e28543"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.505013 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-96tjr" podStartSLOduration=7.504978595 podStartE2EDuration="7.504978595s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.413603265 +0000 UTC m=+125.689100205" watchObservedRunningTime="2026-02-02 00:12:06.504978595 +0000 UTC m=+125.780475525" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.508609 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.510257 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.010215984 +0000 UTC m=+126.285713064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.526781 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" event={"ID":"e1b2e108-2c25-4942-b6bb-9bd186134bc9","Type":"ContainerStarted","Data":"e3e212c1a907b06a08c0874a4a7e782b1cd96348ead4ad845896371accc2b9fc"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.536987 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-4zcv5" podStartSLOduration=102.536951021 podStartE2EDuration="1m42.536951021s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.534297101 +0000 UTC m=+125.809794041" watchObservedRunningTime="2026-02-02 00:12:06.536951021 +0000 UTC m=+125.812447941" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.548759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" event={"ID":"97af9c02-0ff8-4146-9313-f3ecc17e1faa","Type":"ContainerStarted","Data":"c52202087275d9c392ee71614a4bdc7280f0da97e4aab336cd55eefc8f9f9cce"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.549752 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:06 crc 
kubenswrapper[5108]: I0202 00:12:06.559611 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-824d7" podStartSLOduration=7.55957679 podStartE2EDuration="7.55957679s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.556050987 +0000 UTC m=+125.831547927" watchObservedRunningTime="2026-02-02 00:12:06.55957679 +0000 UTC m=+125.835073720" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.560106 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" event={"ID":"4c22e3c9-f940-436c-bcd4-0ae77d143061","Type":"ContainerStarted","Data":"3d130c51810ca80c8780d85b3a1c6ab4108d688cf06fe845ba58d55f74cd48e4"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.569125 5108 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-mztxr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.569295 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" podUID="97af9c02-0ff8-4146-9313-f3ecc17e1faa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.572971 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" event={"ID":"fde8d9df-2e55-498d-acbe-7b5396cac5a7","Type":"ContainerStarted","Data":"324379fbc8b1e9fd64f4683e4a6f6d22089fc5a80820695f409c29698e844409"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.597744 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerStarted","Data":"17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.598883 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmvtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.598918 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.616918 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.116897939 +0000 UTC m=+126.392394869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.613990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.631387 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" podStartSLOduration=102.631359952 podStartE2EDuration="1m42.631359952s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.596035146 +0000 UTC m=+125.871532086" watchObservedRunningTime="2026-02-02 00:12:06.631359952 +0000 UTC m=+125.906856882" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.644027 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerStarted","Data":"eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.645273 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.693850 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.693969 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.694353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" event={"ID":"00c9b96f-70c1-47b2-ab2f-570c9911ecaf","Type":"ContainerStarted","Data":"1bc48f23bfa642442e677ada079579d82166f03d7ac885c09d39584358fdd49a"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.685205 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" podStartSLOduration=102.685168006 podStartE2EDuration="1m42.685168006s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 00:12:06.639689562 +0000 UTC m=+125.915186502" watchObservedRunningTime="2026-02-02 00:12:06.685168006 +0000 UTC m=+125.960664936" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.731860 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" podStartSLOduration=102.731842553 podStartE2EDuration="1m42.731842553s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.70870311 +0000 UTC m=+125.984200040" watchObservedRunningTime="2026-02-02 00:12:06.731842553 +0000 UTC m=+126.007339483" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.736660 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.738054 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.238030626 +0000 UTC m=+126.513527556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.740657 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" podStartSLOduration=102.740633005 podStartE2EDuration="1m42.740633005s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.730482146 +0000 UTC m=+126.005979086" watchObservedRunningTime="2026-02-02 00:12:06.740633005 +0000 UTC m=+126.016129935" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.758774 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" event={"ID":"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc","Type":"ContainerStarted","Data":"a2a39167d6d7e6c0a2990e61e06142b9462e1998d955efeb9b8ebde09a404a54"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.760188 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.767866 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-h2slm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" 
start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.767914 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" podUID="f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.772476 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-cp5z2" podStartSLOduration=102.772464238 podStartE2EDuration="1m42.772464238s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.770858646 +0000 UTC m=+126.046355596" watchObservedRunningTime="2026-02-02 00:12:06.772464238 +0000 UTC m=+126.047961168" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.796650 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" event={"ID":"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea","Type":"ContainerStarted","Data":"9b218b76fc3cfb3ac69f22ca94617bf588dd68acb2fedc57c3137ca671997ebf"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.797618 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.802378 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-z28zc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.802430 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" podUID="6c411323-7b32-4e2b-a2b9-c6b63abeb1ea" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.802956 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" podStartSLOduration=102.802937715 podStartE2EDuration="1m42.802937715s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.800702286 +0000 UTC m=+126.076199226" watchObservedRunningTime="2026-02-02 00:12:06.802937715 +0000 UTC m=+126.078434645" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.821107 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" event={"ID":"64332d15-ee3f-4864-9165-3217a06b24c2","Type":"ContainerStarted","Data":"f746cfa1b226d194076a81bc09280df7d2ca9bc3bdc50fc530e2f5cafd0ed8cd"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.834162 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" 
event={"ID":"525b7b06-ae33-4a3b-bf12-139bff69a17c","Type":"ContainerStarted","Data":"4464db89cbe5f99d96c0c05963685a847dc480eba93dd746fa39e3752e5fafdb"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.834172 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" podStartSLOduration=102.834151581 podStartE2EDuration="1m42.834151581s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.831843621 +0000 UTC m=+126.107340561" watchObservedRunningTime="2026-02-02 00:12:06.834151581 +0000 UTC m=+126.109648511" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.844037 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.844725 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.344701001 +0000 UTC m=+126.620197921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.871594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" event={"ID":"e88c0487-caa2-44ee-a139-33b289b9fc2d","Type":"ContainerStarted","Data":"c9c1510e0e7a2b73e6633080724d52fc81d26026d11a94f526c06cccdb9f97fe"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.872593 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" podStartSLOduration=102.872579899 podStartE2EDuration="1m42.872579899s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.871698846 +0000 UTC m=+126.147195776" watchObservedRunningTime="2026-02-02 00:12:06.872579899 +0000 UTC m=+126.148076819" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.892794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" event={"ID":"99916b4a-423b-4db6-a912-cc2ef585eab3","Type":"ContainerStarted","Data":"9af916a3e2c690fac19e65958c6a59828797446e3c0964884e1bddea6549a167"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.893737 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" podStartSLOduration=102.893714949 
podStartE2EDuration="1m42.893714949s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.893122273 +0000 UTC m=+126.168619213" watchObservedRunningTime="2026-02-02 00:12:06.893714949 +0000 UTC m=+126.169211879" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.900517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9bzk" event={"ID":"f864fdce-3b6b-4ba2-9159-12c2d21f2601","Type":"ContainerStarted","Data":"f581a964a78f67535ae45c2872fba7b71f95b64da206093e101568faeea41f9a"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.909095 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:06 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:06 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:06 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.909155 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.918718 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" event={"ID":"2c1108f2-209c-4d4c-affc-fe8fbfd27cca","Type":"ContainerStarted","Data":"7048b856a9bd96fe898a5dc34cd39cba8845962301bb19480b77681b35124f3b"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.919215 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.935219 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" podStartSLOduration=102.935189968 podStartE2EDuration="1m42.935189968s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.923634031 +0000 UTC m=+126.199130981" watchObservedRunningTime="2026-02-02 00:12:06.935189968 +0000 UTC m=+126.210686898" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.952390 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.952999 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.452956678 +0000 UTC m=+126.728453618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.966444 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" podStartSLOduration=102.966420165 podStartE2EDuration="1m42.966420165s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.965372766 +0000 UTC m=+126.240869706" watchObservedRunningTime="2026-02-02 00:12:06.966420165 +0000 UTC m=+126.241917095" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.976023 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" event={"ID":"27d783b3-6f7d-4f4d-b054-225bfcb98fd5","Type":"ContainerStarted","Data":"cdb5b6136f9949f2b96c7eb6c9309f9ff4a2452f2041a46b18ca06b2be9bcbbd"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.993050 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" podStartSLOduration=102.993025399 podStartE2EDuration="1m42.993025399s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.986858796 +0000 UTC m=+126.262355736" watchObservedRunningTime="2026-02-02 00:12:06.993025399 +0000 UTC m=+126.268522329" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.015897 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" podStartSLOduration=103.015875164 podStartE2EDuration="1m43.015875164s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:07.014304313 +0000 UTC m=+126.289801253" watchObservedRunningTime="2026-02-02 00:12:07.015875164 +0000 UTC m=+126.291372084" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.055055 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.055833 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.555791051 +0000 UTC m=+126.831287981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.098339 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46330: no serving certificate available for the kubelet" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.159736 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.159973 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.659932269 +0000 UTC m=+126.935429199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.160192 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.160596 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.660580676 +0000 UTC m=+126.936077606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.261618 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.261856 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.761820556 +0000 UTC m=+127.037317486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.262039 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.262394 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.762386731 +0000 UTC m=+127.037883661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.346220 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.362960 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.363086 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.863058997 +0000 UTC m=+127.138555927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.363691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.364075 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.864065364 +0000 UTC m=+127.139562294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.466008 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.466255 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.966204959 +0000 UTC m=+127.241701889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.466687 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.467130 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.967113642 +0000 UTC m=+127.242610572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.567294 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.567486 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.06745364 +0000 UTC m=+127.342950560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.567815 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.568159 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.068144528 +0000 UTC m=+127.343641458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.669099 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.669255 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.169180744 +0000 UTC m=+127.444677664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.669769 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.670176 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.17016315 +0000 UTC m=+127.445660080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.771003 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.771256 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.271202545 +0000 UTC m=+127.546699485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.771479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.771843 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.271823232 +0000 UTC m=+127.547320242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.873299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.873591 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.373552755 +0000 UTC m=+127.649049685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.873745 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.874103 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.37409348 +0000 UTC m=+127.649590410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.904201 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:07 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:07 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:07 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.904289 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.975936 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.976176 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.476135882 +0000 UTC m=+127.751632812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.976634 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.977043 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.477027655 +0000 UTC m=+127.752524585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.983932 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerStarted","Data":"fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4"} Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.984933 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.993204 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" event={"ID":"45594040-ee30-4578-aa8c-a9e8ef858c06","Type":"ContainerStarted","Data":"790fe13975c980ebcb7c76c8e69d8c4b5bd603664d7da8b1d08e4ed422450fae"} Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.999239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" event={"ID":"74feb297-18d1-4e3a-b077-779e202c89da","Type":"ContainerStarted","Data":"004c454c509e890a028ad24ad5589c03a218efc7d31b0886bb5261bf27c9327b"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.008147 5108 generic.go:358] "Generic (PLEG): container finished" podID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerID="f55a73b195fc4ff73f7a158b317a4f091e335545c9c7fff202d86972324de8ba" exitCode=0 Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.008272 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerDied","Data":"f55a73b195fc4ff73f7a158b317a4f091e335545c9c7fff202d86972324de8ba"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.012015 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerStarted","Data":"b908f275aae1aaf7c4c562e827fe1b58eaa6c5a439a4b12c6a5f9a93dd3d59dc"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.015943 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" event={"ID":"99916b4a-423b-4db6-a912-cc2ef585eab3","Type":"ContainerStarted","Data":"95ae580227d64e996fd6c4eb214373a572187c0e5e5ddc76ce8ae839e3a10f1c"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.017853 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" event={"ID":"9b79d203-f1c7-4523-9d97-51181cdb26d2","Type":"ContainerStarted","Data":"dd17f91a9e7bf2761e4b90fddb30f8edfdcf12c9b8105681db073bcfdf03e7ee"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.020985 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.022877 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-q9bzk" event={"ID":"f864fdce-3b6b-4ba2-9159-12c2d21f2601","Type":"ContainerStarted","Data":"a7401cc9d5ec136d233b3818be7092e4551e67126c925ad9d5a73a7469eeba49"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.023013 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.025504 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" event={"ID":"2c1108f2-209c-4d4c-affc-fe8fbfd27cca","Type":"ContainerStarted","Data":"be240166e4ca3f513a63efcc02aeb296bb9fb2204003bd906232438ea6aa0a8a"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.028059 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" event={"ID":"27d783b3-6f7d-4f4d-b054-225bfcb98fd5","Type":"ContainerStarted","Data":"28a237d432d0c45ff9af8f1e618332c53ebefb605b5dbbb846fdce9c29d4ab4c"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.030250 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"4385ec25f9530507d880fa25979bb56c026c5e36ad48bc8a34a7213b4081acf6"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.030402 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-h2slm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.030495 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" podUID="f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.032022 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-z28zc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.032056 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" podUID="6c411323-7b32-4e2b-a2b9-c6b63abeb1ea" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035394 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035448 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035589 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmvtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035670 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.044103 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podStartSLOduration=9.044079151 podStartE2EDuration="9.044079151s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.041997136 +0000 UTC m=+127.317494066" watchObservedRunningTime="2026-02-02 00:12:08.044079151 +0000 UTC m=+127.319576081" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.044520 5108 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-cvtnf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.044706 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" podUID="2b96d2a0-be27-428e-8bfd-f78a09feb756" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.077307 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.081195 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.085818 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.585792255 +0000 UTC m=+127.861289345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.103162 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" podStartSLOduration=104.103143105 podStartE2EDuration="1m44.103143105s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.100607238 +0000 UTC m=+127.376104188" watchObservedRunningTime="2026-02-02 00:12:08.103143105 +0000 UTC m=+127.378640035" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.157874 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" podStartSLOduration=104.157844304 podStartE2EDuration="1m44.157844304s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.155979114 +0000 UTC m=+127.431476054" watchObservedRunningTime="2026-02-02 00:12:08.157844304 +0000 UTC m=+127.433341234" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.186501 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.187711 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.687684454 +0000 UTC m=+127.963181594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.188941 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" podStartSLOduration=104.188912666 podStartE2EDuration="1m44.188912666s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.186451581 +0000 UTC m=+127.461948541" watchObservedRunningTime="2026-02-02 00:12:08.188912666 +0000 UTC m=+127.464409596" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.255983 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" podStartSLOduration=104.255964832 podStartE2EDuration="1m44.255964832s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.225928356 +0000 UTC m=+127.501425296" watchObservedRunningTime="2026-02-02 00:12:08.255964832 +0000 UTC m=+127.531461762" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.284885 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q9bzk" podStartSLOduration=9.284868357 podStartE2EDuration="9.284868357s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.282306359 +0000 UTC m=+127.557803309" watchObservedRunningTime="2026-02-02 00:12:08.284868357 +0000 UTC m=+127.560365287" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.294908 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.295334 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.795314624 +0000 UTC m=+128.070811554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.311166 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" podStartSLOduration=104.311131693 podStartE2EDuration="1m44.311131693s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.308554995 +0000 UTC m=+127.584051935" watchObservedRunningTime="2026-02-02 00:12:08.311131693 +0000 UTC m=+127.586628623" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.397097 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.397611 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.897593603 +0000 UTC m=+128.173090543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.424139 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46346: no serving certificate available for the kubelet" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.498861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.499446 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.999427099 +0000 UTC m=+128.274924029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.601260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.601715 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.101695337 +0000 UTC m=+128.377192267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.702603 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.702711 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.202688661 +0000 UTC m=+128.478185591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.703058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.703372 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.20336465 +0000 UTC m=+128.478861580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.768186 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"] Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.805395 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.805506 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.305483153 +0000 UTC m=+128.580980083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.805844 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.806218 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.306206942 +0000 UTC m=+128.581703882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.901820 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:08 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:08 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:08 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.901948 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.907355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.907733 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.4077025 +0000 UTC m=+128.683199430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.008889 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.009263 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.509248919 +0000 UTC m=+128.784745849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.040324 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.040372 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.110000 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.110497 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.6104773 +0000 UTC m=+128.885974230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.212583 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.217760 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.71774555 +0000 UTC m=+128.993242480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.313590 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.314011 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.813991969 +0000 UTC m=+129.089488899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.415456 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.415872 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.915857176 +0000 UTC m=+129.191354096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.481401 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.483444 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.530985 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.531351 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.031330664 +0000 UTC m=+129.306827594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.632900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.634097 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.134077655 +0000 UTC m=+129.409574585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.646043 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.734097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.734286 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.234250088 +0000 UTC m=+129.509747008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.734788 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.735188 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.235168462 +0000 UTC m=+129.510665382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.751790 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.752357 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerName="collect-profiles" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.752373 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerName="collect-profiles" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.752478 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerName="collect-profiles" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.778488 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.778660 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.784120 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.784319 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.835795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"8285a46b-171e-4c8c-ba54-5ab062df76fc\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836063 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836114 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"8285a46b-171e-4c8c-ba54-5ab062df76fc\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"8285a46b-171e-4c8c-ba54-5ab062df76fc\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.836403 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.336365741 +0000 UTC m=+129.611862671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836935 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.837183 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "8285a46b-171e-4c8c-ba54-5ab062df76fc" (UID: "8285a46b-171e-4c8c-ba54-5ab062df76fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.837508 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.337500572 +0000 UTC m=+129.612997502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.849973 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp" (OuterVolumeSpecName: "kube-api-access-xcnnp") pod "8285a46b-171e-4c8c-ba54-5ab062df76fc" (UID: "8285a46b-171e-4c8c-ba54-5ab062df76fc"). InnerVolumeSpecName "kube-api-access-xcnnp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.855712 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.855861 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8285a46b-171e-4c8c-ba54-5ab062df76fc" (UID: "8285a46b-171e-4c8c-ba54-5ab062df76fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.894686 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.894879 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903509 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:09 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:09 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:09 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903627 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903853 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.938639 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.938870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.938943 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.939021 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.939035 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.939044 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.939128 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.439109162 +0000 UTC m=+129.714606092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.001995 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.008618 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.012870 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.015257 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.040757 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.040811 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.040936 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.041123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.041161 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.041637 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.541621656 +0000 UTC m=+129.817118586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.041786 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.052978 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" gracePeriod=30 Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.053436 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.054508 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerDied","Data":"791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d"} Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.054550 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.073287 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.102188 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.142662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143044 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143080 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143294 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143436 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.143579 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.643557456 +0000 UTC m=+129.919054376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143712 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.177344 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.186517 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.218517 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.244506 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.244870 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.244894 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245448 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245492 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245687 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245734 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.247728 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.747518889 +0000 UTC m=+130.023015819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.248395 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.249366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.250043 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.251795 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.321610 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.339736 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.347727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.348043 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.348145 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.348279 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.349740 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.849709835 +0000 UTC m=+130.125206765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.403814 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451682 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451871 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451927 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.452558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.452815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.453521 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.953502603 +0000 UTC m=+130.228999533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.482348 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.482575 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.488596 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.493152 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 02 00:12:10 crc kubenswrapper[5108]: W0202 00:12:10.504799 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf6bc5fe_38fb_4fd6_b9a9_57172b79a6ca.slice/crio-f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c WatchSource:0}: Error finding container f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c: Status 404 returned error can't find the container with id f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.553642 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.554572 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.054494447 +0000 UTC m=+130.329991377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.554810 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.555485 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.055473264 +0000 UTC m=+130.330970194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.599069 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.601291 5108 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-wbv6f container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]log ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]etcd ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/max-in-flight-filter ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 00:12:10 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-startinformers ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 00:12:10 crc kubenswrapper[5108]: livez check failed Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.601333 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" podUID="8490096f-f230-4160-bb09-338c9fa9f7ca" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.641670 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.655902 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.656326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.656412 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.656441 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.656605 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.156575631 +0000 UTC m=+130.432072561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758445 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.758919 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.25890516 +0000 UTC m=+130.534402090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.759474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.759917 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.790520 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.790950 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.794440 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.796850 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.806399 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: W0202 00:12:10.831339 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef823528_7549_4a91_83c9_e5b243ecb37c.slice/crio-f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf WatchSource:0}: Error finding container f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf: Status 404 returned error can't find the container with id f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.844507 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.860785 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.861109 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.361090677 +0000 UTC m=+130.636587607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.886683 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.934839 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:10 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.934904 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.962968 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.963043 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.963110 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.963144 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.963502 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.463488128 +0000 UTC m=+130.738985058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.986939 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.987054 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.999393 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-9pw49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.999467 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-9pw49" podUID="6d992c02-f6cc-4488-9108-a72c6c2f3dcf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.019712 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.037905 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46360: no serving certificate available for the kubelet" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.065203 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.065600 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.565567261 +0000 UTC m=+130.841064191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066585 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066636 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.067919 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.567898733 +0000 UTC m=+130.843395743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.068902 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.072626 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.122389 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.129713 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.129989 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerStarted","Data":"70144879ca1801ad320f413cacebe5723f4e76015c3286fd5327879285141829"} Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.132520 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.139105 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerStarted","Data":"f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c"} Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.168352 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.168686 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.668648751 +0000 UTC m=+130.944145681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.169092 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.169741 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.669733559 +0000 UTC m=+130.945230489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.187943 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerStarted","Data":"f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf"} Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.257620 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.270362 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.271125 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.771104844 +0000 UTC m=+131.046601774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.372145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.372687 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.872647963 +0000 UTC m=+131.148144893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.474021 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.474285 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.974241603 +0000 UTC m=+131.249738533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.474716 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.475050 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.975041474 +0000 UTC m=+131.250538404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.576341 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.576541 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.076509711 +0000 UTC m=+131.352006641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.665523 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:12:11 crc kubenswrapper[5108]: W0202 00:12:11.678413 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41859985_fc1d_4d4e_bbe8_b0a99955ac0a.slice/crio-6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be WatchSource:0}: Error finding container 6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be: Status 404 returned error can't find the container with id 6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.679770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.680191 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.180173736 +0000 UTC m=+131.455670666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.780647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.780918 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.280873022 +0000 UTC m=+131.556369962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.781085 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.781597 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.281567932 +0000 UTC m=+131.557065112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.883958 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.884110 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.384084456 +0000 UTC m=+131.659581386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.884443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.884842 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.384830435 +0000 UTC m=+131.660327365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.902474 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:11 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:11 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:11 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.902564 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.986194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.986509 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.486454797 +0000 UTC m=+131.761951757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.987662 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.988079 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.488060179 +0000 UTC m=+131.763557289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.088493 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.088746 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.588707074 +0000 UTC m=+131.864204004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.089633 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.090327 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.590296577 +0000 UTC m=+131.865793507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.136921 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef823528_7549_4a91_83c9_e5b243ecb37c.slice/crio-9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00.scope\": RecentStats: unable to find data in memory cache]" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.189108 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.191899 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.192517 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.692493962 +0000 UTC m=+131.967990912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.228214 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerID="9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00" exitCode=0 Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.236085 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerID="dc6f982b2d56c1abb172d98e66aa0c15b24571bc47876df35d5985b98e039d3c" exitCode=0 Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272002 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerStarted","Data":"6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272049 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272066 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"affca2f46576140bfc2f7fa793d8be2e955c260a936863a0aaaa74ff13f67148"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272079 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerStarted","Data":"2926e9efd55ee24f9bd84c1f1c357729c5787a1065057fec02eee0a89b6c7866"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272092 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerStarted","Data":"f53470f0349cc6b8707af3c2bc15c0525494aead25f907bb884298efb59e0e9b"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272104 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272123 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"eb0a00b12767c4ff782045029b2e342458acfc4bf6b005b9598c899c329f4a88"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272147 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"dc6f982b2d56c1abb172d98e66aa0c15b24571bc47876df35d5985b98e039d3c"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerStarted","Data":"bf1f4e8893cf7d38c33c0c17e67ab9bd9445bacbc6cedb29875eaf455b2ef485"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272298 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.276073 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.295034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.295085 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.295159 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.296057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.296715 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.796688091 +0000 UTC m=+132.072185021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.299050 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.299029233 podStartE2EDuration="3.299029233s" podCreationTimestamp="2026-02-02 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:12.295268884 +0000 UTC m=+131.570765814" watchObservedRunningTime="2026-02-02 00:12:12.299029233 +0000 UTC m=+131.574526163" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.371796 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=3.37176781 podStartE2EDuration="3.37176781s" podCreationTimestamp="2026-02-02 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:12.370786464 +0000 UTC m=+131.646283404" watchObservedRunningTime="2026-02-02 00:12:12.37176781 +0000 UTC m=+131.647264740" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398216 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.398381 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.898350224 +0000 UTC m=+132.173847144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398722 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398835 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398861 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.399293 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.899272628 +0000 UTC m=+132.174769558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.399934 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.400150 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.441956 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.503213 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.503655 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.003620911 +0000 UTC m=+132.279117841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.504020 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.504424 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.004415853 +0000 UTC m=+132.279912783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.604832 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.606017 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.606283 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.106240989 +0000 UTC m=+132.381737919 (durationBeforeRetry 500ms). 
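The repeating MountVolume/UnmountVolume failures above share one cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet, so every volume operation fails fast ("not found in the list of registered CSI drivers") and is parked by nestedpendingoperations with a 500ms durationBeforeRetry. A minimal, self-contained Go sketch of that lookup-then-backoff pattern follows; names like driverRegistry and pendingOp are illustrative stand-ins, not kubelet's actual types.

    // Illustrative sketch only: "fail fast until the driver registers,
    // retry after a fixed backoff", as seen in the log above.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint
    }

    func (r *driverRegistry) lookup(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // Mirrors the log's "driver name ... not found in the list of
            // registered CSI drivers" failure mode.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    // pendingOp mirrors the "No retries permitted until <t>" bookkeeping:
    // after a failure the operation may not run again before now+backoff.
    type pendingOp struct {
        notBefore time.Time
    }

    func (p *pendingOp) fail(backoff time.Duration) {
        p.notBefore = time.Now().Add(backoff) // log shows durationBeforeRetry 500ms
    }

    func (p *pendingOp) ready() bool { return time.Now().After(p.notBefore) }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}} // nothing registered yet
        op := &pendingOp{}
        if _, err := reg.lookup("kubevirt.io.hostpath-provisioner"); err != nil {
            op.fail(500 * time.Millisecond)
            fmt.Println(err, "- no retries permitted until", op.notBefore.Format(time.RFC3339Nano))
        }
    }

Once the driver registers (see the plugin_watcher/RegisterPlugin entries further down), the same parked operations succeed on their next retry, which is exactly the transition this log captures.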
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.606646 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.607268 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.107255075 +0000 UTC m=+132.382752005 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.619420 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.645643 5108 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.664631 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"]
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.671413 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.705679 5108 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T00:12:12.645668783Z","UUID":"98550e70-daa2-4fdb-9e32-d2c134d8977f","Handler":null,"Name":"","Endpoint":""}
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708124 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.708273 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.20824529 +0000 UTC m=+132.483742220 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708636 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708714 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708805 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.709324 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.209314888 +0000 UTC m=+132.484811819 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.722503 5108 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.722767 5108 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.812097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.813580 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.813741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.813863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.814131 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.814282 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288"
Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.831033 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
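The plugin_watcher.go, reconciler.go, and csi_plugin.go entries above are the kubelet side of CSI node-plugin registration: a registration socket appears under /var/lib/kubelet/plugins_registry/, the kubelet asks the plugin for its name, endpoint, and supported versions, validates them, and only then adds the driver to the registry that the mount and unmount paths consult. The real exchange is a gRPC handshake over the plugin socket; the following is a simplified, self-contained Go sketch with schematic stand-in types, not kubelet's API.

    // Schematic of the node-plugin registration handshake seen above.
    // pluginInfo and validate() are simplified stand-ins for the real
    // gRPC registration service the kubelet speaks on the plugin socket.
    package main

    import "fmt"

    type pluginInfo struct {
        Name              string
        Endpoint          string
        SupportedVersions []string
    }

    // validate mirrors csi_plugin.go's "Trying to validate a new CSI Driver" step.
    func validate(info pluginInfo) error {
        if info.Name == "" || info.Endpoint == "" || len(info.SupportedVersions) == 0 {
            return fmt.Errorf("invalid plugin info: %+v", info)
        }
        return nil
    }

    func main() {
        registry := map[string]string{} // the "list of registered CSI drivers"

        // 1. plugin_watcher: a socket appears under /var/lib/kubelet/plugins_registry/.
        // 2. reconciler: RegisterPlugin fetches the plugin's self-description.
        info := pluginInfo{
            Name:              "kubevirt.io.hostpath-provisioner",
            Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            SupportedVersions: []string{"1.0.0"},
        }

        // 3. csi_plugin: validate, then register; parked volume ops
        //    stop failing on their next retry after this point.
        if err := validate(info); err != nil {
            panic(err)
        }
        registry[info.Name] = info.Endpoint
        fmt.Printf("Register new plugin with name: %s at endpoint: %s\n", info.Name, info.Endpoint)
    }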
"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.845168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.895855 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.903069 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:12 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:12 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:12 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.903179 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.918576 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.935791 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.935854 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.957440 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.032605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.196797 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.233017 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.241810 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.244726 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.261020 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.261357 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 02 00:12:13 crc kubenswrapper[5108]: W0202 00:12:13.292048 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7a5230e_8980_4561_bfb3_015283fcbaa4.slice/crio-ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e WatchSource:0}: Error finding container ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e: Status 404 returned error can't find the container with id ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.297538 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.319406 5108 generic.go:358] "Generic (PLEG): container finished" podID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerID="f53470f0349cc6b8707af3c2bc15c0525494aead25f907bb884298efb59e0e9b" exitCode=0 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.319575 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerDied","Data":"f53470f0349cc6b8707af3c2bc15c0525494aead25f907bb884298efb59e0e9b"} Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.325845 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerID="f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3" exitCode=0 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.326103 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3"} Feb 02 00:12:13 crc 
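The csi_attacher.go entry above ("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...") shows why this volume never goes through a separate device-staging step: the kubelet checks the node plugin's advertised capabilities and only performs staging when STAGE_UNSTAGE_VOLUME is present; otherwise MountDevice "succeeds" immediately, recording only the global mount path. A schematic, self-contained Go sketch of that gate follows; the real kubelet issues a NodeGetCapabilities RPC over the driver's CSI socket rather than calling a local stub.

    // Schematic of the capability gate behind "STAGE_UNSTAGE_VOLUME capability
    // not set. Skipping MountDevice...". Self-contained illustration only.
    package main

    import "fmt"

    type nodeCapability string

    const stageUnstageVolume nodeCapability = "STAGE_UNSTAGE_VOLUME"

    // nodeGetCapabilities stands in for the CSI NodeGetCapabilities RPC; the
    // hostpath provisioner in this log advertises no staging support.
    func nodeGetCapabilities() []nodeCapability {
        return nil
    }

    func mountDevice(volumeID string) {
        for _, c := range nodeGetCapabilities() {
            if c == stageUnstageVolume {
                fmt.Println("staging required: would call NodeStageVolume for", volumeID)
                return
            }
        }
        // Matches the log: staging is skipped and MountDevice succeeds
        // immediately; only NodePublishVolume (SetUp) runs later.
        fmt.Println("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice for", volumeID)
    }

    func main() {
        mountDevice("pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2")
    }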
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.331671 5108 generic.go:358] "Generic (PLEG): container finished" podID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerID="b91c60dbd115b4b7905f65ba4aae50ffb73107e888d42e0249b2d0b2231508b8" exitCode=0
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.331831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"b91c60dbd115b4b7905f65ba4aae50ffb73107e888d42e0249b2d0b2231508b8"}
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.332785 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.333036 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.333167 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.340683 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"9d5276904f486560300929532b44d4b52eb74aa22d216eeb7926559631800e8b"}
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.352874 5108 generic.go:358] "Generic (PLEG): container finished" podID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerID="2926e9efd55ee24f9bd84c1f1c357729c5787a1065057fec02eee0a89b6c7866" exitCode=0
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.352942 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerDied","Data":"2926e9efd55ee24f9bd84c1f1c357729c5787a1065057fec02eee0a89b6c7866"}
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.402716 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"]
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436254 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436696 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.437168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.450317 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.450681 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.488605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.597559 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.610362 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.615045 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"]
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.655931 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"]
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.656059 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.742418 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.743111 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.743298 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.790089 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"]
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.853548 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.853744 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.853812 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.854431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.855026 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.895900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.900487 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:13 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:13 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:13 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.900559 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.900495 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"]
Feb 02 00:12:13 crc kubenswrapper[5108]: W0202 00:12:13.912736 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab8f756d_4492_4dfc_ae46_80bb93dd6d86.slice/crio-91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8 WatchSource:0}: Error finding container 91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8: Status 404 returned error can't find the container with id 91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8
Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.961667 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.077337 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.366539 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerID="2e1ed35cecd83ec6e1cd535df757ea287981a6c7aebb8cec80b33fdbbc5c5139" exitCode=0
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.366683 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"2e1ed35cecd83ec6e1cd535df757ea287981a6c7aebb8cec80b33fdbbc5c5139"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.366724 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerStarted","Data":"ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.375010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"4e64f9652f0b240af997a0094d5833499b1a766a26c92b2aac629ab4f3330dfb"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.390639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerStarted","Data":"527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.390717 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerStarted","Data":"1447dcac9c96a7085eca20122133eb4f717b3af0915a27a86280d315ab8e69c0"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.391314 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.392957 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerID="cf5c6a2438aea906e6d82a2f7c0400d982272ffc4bbb055c232a1e2fffedf93d" exitCode=0
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.393034 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"cf5c6a2438aea906e6d82a2f7c0400d982272ffc4bbb055c232a1e2fffedf93d"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.393054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerStarted","Data":"a1c222f8566d6eeedc3932944e3dca34068066d180f7b69bf128f26076481b1b"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.415170 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.415274 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8"}
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.442082 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" podStartSLOduration=110.442062892 podStartE2EDuration="1m50.442062892s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:14.438123185 +0000 UTC m=+133.713620135" watchObservedRunningTime="2026-02-02 00:12:14.442062892 +0000 UTC m=+133.717559822"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.480845 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" podStartSLOduration=15.480812146 podStartE2EDuration="15.480812146s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:14.472197062 +0000 UTC m=+133.747694022" watchObservedRunningTime="2026-02-02 00:12:14.480812146 +0000 UTC m=+133.756309076"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.637503 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"]
Feb 02 00:12:14 crc kubenswrapper[5108]: W0202 00:12:14.651832 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe89a3e_59b8_4707_863b_ed23bea6f273.slice/crio-1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c WatchSource:0}: Error finding container 1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c: Status 404 returned error can't find the container with id 1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.729969 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.772530 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") "
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.772692 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") "
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.773294 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ecff25a2-faeb-4efb-9e50-b8981535bbb3" (UID: "ecff25a2-faeb-4efb-9e50-b8981535bbb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.781479 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ecff25a2-faeb-4efb-9e50-b8981535bbb3" (UID: "ecff25a2-faeb-4efb-9e50-b8981535bbb3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.803562 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.873998 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") "
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874108 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") "
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874107 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" (UID: "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874550 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874573 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874585 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.881605 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" (UID: "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.904626 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:14 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:14 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:14 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.904742 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.976384 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.222026 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.227807 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.428502 5108 generic.go:358] "Generic (PLEG): container finished" podID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerID="c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4" exitCode=0 Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.428708 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.431023 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerDied","Data":"70144879ca1801ad320f413cacebe5723f4e76015c3286fd5327879285141829"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.431053 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70144879ca1801ad320f413cacebe5723f4e76015c3286fd5327879285141829" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.431133 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.438334 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerDied","Data":"f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.438380 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.438464 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.447644 5108 generic.go:358] "Generic (PLEG): container finished" podID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" exitCode=0 Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.447701 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.448016 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerStarted","Data":"1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.714797 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.898846 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:15 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:15 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:15 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.898967 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:16 crc kubenswrapper[5108]: I0202 00:12:16.187590 5108 ???:1] "http: TLS handshake error from 192.168.126.11:40198: no serving certificate available for the kubelet" Feb 02 00:12:16 crc kubenswrapper[5108]: I0202 00:12:16.898546 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:16 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:16 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:16 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:16 crc kubenswrapper[5108]: I0202 00:12:16.899083 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:17 crc kubenswrapper[5108]: I0202 00:12:17.899927 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:17 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:17 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:17 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:17 crc 
kubenswrapper[5108]: I0202 00:12:17.900007 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.989731 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.991642 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.995661 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.995728 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.037906 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.044013 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.046187 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.900941 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:18 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:18 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:18 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.901043 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.038011 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
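The three ExecSync errors above are an exec readiness probe (test -f /ready/ready) racing container teardown: the runtime refuses to start a new exec process in a stopping container, so the kubelet records the probe as errored with probeResult="unknown" rather than a clean failure. A schematic, self-contained Go sketch of that exec-probe path follows; execSync below is a stand-in for the CRI ExecSync RPC against the runtime, not a real client call.

    // Schematic of the exec readiness probe path behind the "ExecSync cmd from
    // runtime service failed ... container is stopping" errors above.
    package main

    import (
        "errors"
        "fmt"
    )

    var errStopping = errors.New("command error: cannot register an exec PID: container is stopping")

    // execSync mimics the runtime rejecting exec in a stopping container;
    // the real kubelet issues a CRI ExecSync RPC here.
    func execSync(containerID string, cmd []string) (exitCode int, err error) {
        return -1, errStopping
    }

    func runReadinessProbe(containerID string) string {
        code, err := execSync(containerID, []string{"/bin/bash", "-c", "test -f /ready/ready"})
        switch {
        case err != nil:
            // Kubelet logs "Probe errored" and reports probeResult="unknown".
            fmt.Println("Probe errored:", err)
            return "unknown"
        case code == 0:
            return "success"
        default:
            return "failure"
        }
    }

    func main() {
        fmt.Println("probeResult:", runReadinessProbe("fafb2432d8f7"))
    }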
\"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.038097 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.897293 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:19 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:19 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:19 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.897612 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.897809 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:20 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:20 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:20 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.897893 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.987052 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-9pw49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.987141 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-9pw49" podUID="6d992c02-f6cc-4488-9108-a72c6c2f3dcf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 02 00:12:21 crc kubenswrapper[5108]: I0202 00:12:21.897748 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:21 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:21 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:21 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:21 crc kubenswrapper[5108]: I0202 00:12:21.898520 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" 
podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:22 crc kubenswrapper[5108]: I0202 00:12:22.896696 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:22 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:22 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:22 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:22 crc kubenswrapper[5108]: I0202 00:12:22.897308 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.450519 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.450625 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.897549 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.901755 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.285030 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.291790 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.386216 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.386327 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.386467 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.393146 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.398912 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.435028 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.456747 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.472936 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.487767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.491333 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.495255 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.770000 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:12:25 crc kubenswrapper[5108]: I0202 00:12:25.616619 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:12:26 crc kubenswrapper[5108]: I0202 00:12:26.461852 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41080: no serving certificate available for the kubelet" Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.987796 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.990439 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.991837 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.992259 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 02 00:12:29 crc kubenswrapper[5108]: I0202 00:12:29.038283 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:29 crc kubenswrapper[5108]: I0202 00:12:29.038377 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:30 crc kubenswrapper[5108]: I0202 00:12:30.993381 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:31 crc kubenswrapper[5108]: I0202 00:12:31.000571 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.451186 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.451286 5108 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.451340 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452075 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452284 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452624 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed"} pod="openshift-console/downloads-747b44746d-cp5z2" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452701 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" containerID="cri-o://eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed" gracePeriod=2 Feb 02 00:12:34 crc kubenswrapper[5108]: I0202 00:12:34.601833 5108 generic.go:358] "Generic (PLEG): container finished" podID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerID="eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed" exitCode=0 Feb 02 00:12:34 crc kubenswrapper[5108]: I0202 00:12:34.602099 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerDied","Data":"eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed"} Feb 02 00:12:36 crc kubenswrapper[5108]: I0202 00:12:36.465156 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.989184 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.991524 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.993335 5108 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.993451 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 02 00:12:39 crc kubenswrapper[5108]: I0202 00:12:39.047165 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:40 crc kubenswrapper[5108]: I0202 00:12:40.647597 5108 generic.go:358] "Generic (PLEG): container finished" podID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerID="662689ee61fccec648a90a4375a519042cf1cb9c27ef807a261aa5cd1d207f99" exitCode=0 Feb 02 00:12:40 crc kubenswrapper[5108]: I0202 00:12:40.647725 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerDied","Data":"662689ee61fccec648a90a4375a519042cf1cb9c27ef807a261aa5cd1d207f99"} Feb 02 00:12:43 crc kubenswrapper[5108]: I0202 00:12:43.452403 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:43 crc kubenswrapper[5108]: I0202 00:12:43.452787 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.762520 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764880 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerName="pruner" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764916 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerName="pruner" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764939 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerName="pruner" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764952 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerName="pruner" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.765181 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerName="pruner" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.765201 5108 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerName="pruner" Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.971951 5108 ???:1] "http: TLS handshake error from 192.168.126.11:40958: no serving certificate available for the kubelet" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.674940 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.679583 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.679587 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.683938 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.790453 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.790505 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.892428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.892481 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.892629 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.914276 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.987786 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.989841 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.991909 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.991980 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.995329 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:12:52 crc kubenswrapper[5108]: I0202 00:12:52.376533 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 02 00:12:53 crc kubenswrapper[5108]: I0202 00:12:53.458993 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:53 crc kubenswrapper[5108]: I0202 00:12:53.459069 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.198892 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.200676 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.305205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.305298 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.305362 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.406850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.406958 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.407017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.407172 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.407260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.435535 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.520865 5108 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.927702 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.015424 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.017758 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.018678 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca" (OuterVolumeSpecName: "serviceca") pod "dcbaa597-5b18-4219-b757-5f10e86a2c1c" (UID: "dcbaa597-5b18-4219-b757-5f10e86a2c1c"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.023856 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn" (OuterVolumeSpecName: "kube-api-access-2l8sn") pod "dcbaa597-5b18-4219-b757-5f10e86a2c1c" (UID: "dcbaa597-5b18-4219-b757-5f10e86a2c1c"). InnerVolumeSpecName "kube-api-access-2l8sn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.119439 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.119498 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.756731 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerDied","Data":"ab1dda4ca19e44a7d7547556112d79c7a9164fc1db4386291660d7d4020c24e9"} Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.757197 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1dda4ca19e44a7d7547556112d79c7a9164fc1db4386291660d7d4020c24e9" Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.757081 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.137271 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26ppl"] Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.196373 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77c18f0_131e_482e_8e09_602b39b0c163.slice/crio-3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32 WatchSource:0}: Error finding container 3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32: Status 404 returned error can't find the container with id 3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32 Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.268023 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985 WatchSource:0}: Error finding container dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985: Status 404 returned error can't find the container with id dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985 Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.275440 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb WatchSource:0}: Error finding container 8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb: Status 404 returned error can't find the container with id 8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.317899 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d WatchSource:0}: Error finding container dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d: Status 404 returned error can't find the container with id dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.460626 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.495101 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.771693 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerID="9a151e0c7d30d225dcdec2ca4f289d179587e1b95d1e6242438eb1c220d1f684" exitCode=0 Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.771827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"9a151e0c7d30d225dcdec2ca4f289d179587e1b95d1e6242438eb1c220d1f684"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.790192 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" 
event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerStarted","Data":"577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.807569 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerStarted","Data":"023fb9b38bbdab192bf28e7e40fd7ee26699120e07f3c8523c03dd10c67cacbc"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.809790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.811759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.829215 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerStarted","Data":"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.838850 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerID="04829b5f755d429edab97e4438b063d5bde6a76582a91c95f9ffc7a26e491127" exitCode=0 Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.839246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"04829b5f755d429edab97e4438b063d5bde6a76582a91c95f9ffc7a26e491127"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.848824 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.860634 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26ppl" event={"ID":"f77c18f0-131e-482e-8e09-602b39b0c163","Type":"ContainerStarted","Data":"3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.865051 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerStarted","Data":"2af24917791832666af442ed7eb6d64dd5c5d3f93ac4c8f51096e3bbf48aaf59"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.868532 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerID="e6aef248a8876a5e2dc03274ba4ae95994c688af754968e8c9c65f4a76f03504" exitCode=0 Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.868594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" 
event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"e6aef248a8876a5e2dc03274ba4ae95994c688af754968e8c9c65f4a76f03504"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.877996 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.924361 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerStarted","Data":"dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.933969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d"} Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.947427 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerStarted","Data":"963c03dd266c5096ab10583ebcc3deeb02b48308e6dbedbd6e48c0e23e5a63d6"} Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.593038 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.593598 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.593661 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.983058 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerID="dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba" exitCode=0 Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.983160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba"} Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.985349 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"623ff5d876fd59264a0c09ab3b74d07e5e3e1e4ad9feb42b39e38f0278a89d40"} Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.985502 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.988606 5108 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.989532 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.989814 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.989854 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.991174 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerStarted","Data":"491b9dc33be340ea8ece574e78c47522d583627c53b52c926c6593004894e871"} Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.996275 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerStarted","Data":"7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:57.999970 5108 generic.go:358] "Generic (PLEG): container finished" podID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerID="577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d" exitCode=0 Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.000098 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.001898 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerStarted","Data":"4625d2b7c738f7c93691f6690d5bf737225154026be3eb28dcad721028323978"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.004714 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" 
event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"215e28801ee330962c407d77f1324c3625654baa1f13e0944ef2939325bbcbfe"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.011918 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"78c922aa5d47d22d17e1e520318325fce6565e814630f7dd12b068d3f91b5458"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.016994 5108 generic.go:358] "Generic (PLEG): container finished" podID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" exitCode=0 Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.017095 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.022674 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ng2x6_ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/kube-multus-additional-cni-plugins/0.log" Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.022752 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" exitCode=137 Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.022873 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerDied","Data":"fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.030009 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wzh6n" podStartSLOduration=4.617476174 podStartE2EDuration="46.029987233s" podCreationTimestamp="2026-02-02 00:12:12 +0000 UTC" firstStartedPulling="2026-02-02 00:12:14.367824532 +0000 UTC m=+133.643321462" lastFinishedPulling="2026-02-02 00:12:55.780335591 +0000 UTC m=+175.055832521" observedRunningTime="2026-02-02 00:12:58.029341594 +0000 UTC m=+177.304838544" watchObservedRunningTime="2026-02-02 00:12:58.029987233 +0000 UTC m=+177.305484153" Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.030088 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerStarted","Data":"f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.037852 5108 generic.go:358] "Generic (PLEG): container finished" podID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerID="5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711" exitCode=0 Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.037949 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.042340 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-26ppl" event={"ID":"f77c18f0-131e-482e-8e09-602b39b0c163","Type":"ContainerStarted","Data":"db09f2b79f118c53d87217f9d083d12994294c3db45efe4ee167dce6c7a0257f"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.047451 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=6.047431787 podStartE2EDuration="6.047431787s" podCreationTimestamp="2026-02-02 00:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:58.043445908 +0000 UTC m=+177.318942858" watchObservedRunningTime="2026-02-02 00:12:58.047431787 +0000 UTC m=+177.322928717" Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.054008 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerStarted","Data":"44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.060355 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerID="f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1" exitCode=0 Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.060447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1"} Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.062516 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=12.062498167 podStartE2EDuration="12.062498167s" podCreationTimestamp="2026-02-02 00:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:58.058171839 +0000 UTC m=+177.333668789" watchObservedRunningTime="2026-02-02 00:12:58.062498167 +0000 UTC m=+177.337995087" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.156888 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-52cvp" podStartSLOduration=6.734009918 podStartE2EDuration="50.156865859s" podCreationTimestamp="2026-02-02 00:12:09 +0000 UTC" firstStartedPulling="2026-02-02 00:12:12.276819265 +0000 UTC m=+131.552316195" lastFinishedPulling="2026-02-02 00:12:55.699675216 +0000 UTC m=+174.975172136" observedRunningTime="2026-02-02 00:12:59.153806235 +0000 UTC m=+178.429303195" watchObservedRunningTime="2026-02-02 00:12:59.156865859 +0000 UTC m=+178.432362799" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.523557 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ng2x6_ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/kube-multus-additional-cni-plugins/0.log" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.523643 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596011 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596204 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596257 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596316 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596647 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready" (OuterVolumeSpecName: "ready") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.597266 5108 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.597295 5108 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.597366 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.611372 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46" (OuterVolumeSpecName: "kube-api-access-2xl46") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "kube-api-access-2xl46". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.699472 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.699552 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.082094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerStarted","Data":"a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795"} Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.087160 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerID="4625d2b7c738f7c93691f6690d5bf737225154026be3eb28dcad721028323978" exitCode=0 Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.087415 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerDied","Data":"4625d2b7c738f7c93691f6690d5bf737225154026be3eb28dcad721028323978"} Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090129 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ng2x6_ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/kube-multus-additional-cni-plugins/0.log" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090406 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerDied","Data":"b7ccd63409a2599caa2a1d6a430c1e67af5f138dd3ea1e54d57df99b1d6cd73a"} Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090574 5108 scope.go:117] "RemoveContainer" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090466 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.094573 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26ppl" event={"ID":"f77c18f0-131e-482e-8e09-602b39b0c163","Type":"ContainerStarted","Data":"f5df6cd7478c7ba7f695fd1ad9afb726bfa5ba738bd0890317ffd54325afc4f1"} Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.150373 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"] Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.154365 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"] Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.204986 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.205163 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.267513 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pv288" podStartSLOduration=6.700227711 podStartE2EDuration="48.267490742s" podCreationTimestamp="2026-02-02 00:12:12 +0000 UTC" firstStartedPulling="2026-02-02 00:12:14.393610333 +0000 UTC m=+133.669107263" lastFinishedPulling="2026-02-02 00:12:55.960873364 +0000 UTC m=+175.236370294" observedRunningTime="2026-02-02 00:13:00.265577841 +0000 UTC m=+179.541074821" watchObservedRunningTime="2026-02-02 00:13:00.267490742 +0000 UTC m=+179.542987692" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.864699 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.865118 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.106790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7"} Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.109855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerStarted","Data":"491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d"} Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.133620 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.134016 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 
00:13:01.136495 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgmw6" podStartSLOduration=8.688598725 podStartE2EDuration="51.136476741s" podCreationTimestamp="2026-02-02 00:12:10 +0000 UTC" firstStartedPulling="2026-02-02 00:12:13.332449475 +0000 UTC m=+132.607946395" lastFinishedPulling="2026-02-02 00:12:55.780327481 +0000 UTC m=+175.055824411" observedRunningTime="2026-02-02 00:13:01.133708386 +0000 UTC m=+180.409205326" watchObservedRunningTime="2026-02-02 00:13:01.136476741 +0000 UTC m=+180.411973671" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.166624 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9ss2j" podStartSLOduration=7.478698147 podStartE2EDuration="51.16659413s" podCreationTimestamp="2026-02-02 00:12:10 +0000 UTC" firstStartedPulling="2026-02-02 00:12:12.276762794 +0000 UTC m=+131.552259724" lastFinishedPulling="2026-02-02 00:12:55.964658767 +0000 UTC m=+175.240155707" observedRunningTime="2026-02-02 00:13:01.165618294 +0000 UTC m=+180.441115224" watchObservedRunningTime="2026-02-02 00:13:01.16659413 +0000 UTC m=+180.442091050" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.190848 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-26ppl" podStartSLOduration=157.190806069 podStartE2EDuration="2m37.190806069s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:13:01.186110311 +0000 UTC m=+180.461607251" watchObservedRunningTime="2026-02-02 00:13:01.190806069 +0000 UTC m=+180.466302999" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.567824 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" path="/var/lib/kubelet/pods/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/volumes" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.053281 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-52cvp" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:02 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Feb 02 00:13:02 crc kubenswrapper[5108]: > Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.122442 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c"} Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.147943 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4h5k" podStartSLOduration=7.791521844 podStartE2EDuration="49.147919246s" podCreationTimestamp="2026-02-02 00:12:13 +0000 UTC" firstStartedPulling="2026-02-02 00:12:14.42765397 +0000 UTC m=+133.703150900" lastFinishedPulling="2026-02-02 00:12:55.784051372 +0000 UTC m=+175.059548302" observedRunningTime="2026-02-02 00:13:02.144198714 +0000 UTC m=+181.419695654" watchObservedRunningTime="2026-02-02 00:13:02.147919246 +0000 UTC m=+181.423416176" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.317308 5108 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-jgmw6" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:02 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Feb 02 00:13:02 crc kubenswrapper[5108]: > Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.503462 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.647668 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.647926 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.648092 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" (UID: "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.648332 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.655547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" (UID: "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.672497 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.672600 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.735424 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.749402 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.958270 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.958359 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.012556 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.131100 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerStarted","Data":"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc"} Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.133255 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerDied","Data":"023fb9b38bbdab192bf28e7e40fd7ee26699120e07f3c8523c03dd10c67cacbc"} Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.133431 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023fb9b38bbdab192bf28e7e40fd7ee26699120e07f3c8523c03dd10c67cacbc" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.133357 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.161246 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwwt9" podStartSLOduration=9.616507328 podStartE2EDuration="50.161204231s" podCreationTimestamp="2026-02-02 00:12:13 +0000 UTC" firstStartedPulling="2026-02-02 00:12:15.449634322 +0000 UTC m=+134.725131252" lastFinishedPulling="2026-02-02 00:12:55.994331225 +0000 UTC m=+175.269828155" observedRunningTime="2026-02-02 00:13:03.157182492 +0000 UTC m=+182.432679432" watchObservedRunningTime="2026-02-02 00:13:03.161204231 +0000 UTC m=+182.436701171" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.181187 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8l8nm" podStartSLOduration=10.551330186 podStartE2EDuration="53.181166984s" podCreationTimestamp="2026-02-02 00:12:10 +0000 UTC" firstStartedPulling="2026-02-02 00:12:13.330332427 +0000 UTC m=+132.605829357" lastFinishedPulling="2026-02-02 00:12:55.960169215 +0000 UTC m=+175.235666155" observedRunningTime="2026-02-02 00:13:03.178056149 +0000 UTC m=+182.453553099" watchObservedRunningTime="2026-02-02 00:13:03.181166984 +0000 UTC m=+182.456663914" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.191578 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.193000 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.450323 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.450429 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.612200 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.612730 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:04 crc kubenswrapper[5108]: I0202 00:13:04.077907 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:04 crc kubenswrapper[5108]: I0202 00:13:04.077961 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:04 crc kubenswrapper[5108]: I0202 00:13:04.674194 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4h5k" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:04 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s 
Feb 02 00:13:04 crc kubenswrapper[5108]: > Feb 02 00:13:05 crc kubenswrapper[5108]: I0202 00:13:05.123396 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwwt9" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:05 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Feb 02 00:13:05 crc kubenswrapper[5108]: > Feb 02 00:13:07 crc kubenswrapper[5108]: I0202 00:13:07.204368 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:13:07 crc kubenswrapper[5108]: I0202 00:13:07.204989 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pv288" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="registry-server" containerID="cri-o://f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6" gracePeriod=2 Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.183485 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerID="f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6" exitCode=0 Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.183630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6"} Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.211396 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.395065 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.447327 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.600041 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.600105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.647652 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.807306 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.807705 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.862212 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.883414 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.976691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.976838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.976970 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.980011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities" (OuterVolumeSpecName: "utilities") pod "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" (UID: "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.992486 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d" (OuterVolumeSpecName: "kube-api-access-rmr8d") pod "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" (UID: "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa"). InnerVolumeSpecName "kube-api-access-rmr8d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.992886 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" (UID: "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.078425 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.078478 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.078502 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.198960 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"a1c222f8566d6eeedc3932944e3dca34068066d180f7b69bf128f26076481b1b"} Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.199053 5108 scope.go:117] "RemoveContainer" containerID="f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.199330 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.212101 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.265319 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.265848 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.269034 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.272488 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.281838 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.338958 5108 scope.go:117] "RemoveContainer" containerID="04829b5f755d429edab97e4438b063d5bde6a76582a91c95f9ffc7a26e491127" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.362393 5108 scope.go:117] "RemoveContainer" containerID="cf5c6a2438aea906e6d82a2f7c0400d982272ffc4bbb055c232a1e2fffedf93d" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.569677 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" path="/var/lib/kubelet/pods/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa/volumes" Feb 02 00:13:12 crc kubenswrapper[5108]: I0202 00:13:12.005161 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.008420 5108 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.212811 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9ss2j" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" containerID="cri-o://a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795" gracePeriod=2 Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.212917 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgmw6" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" containerID="cri-o://491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d" gracePeriod=2 Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.723190 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.774492 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:14 crc kubenswrapper[5108]: I0202 00:13:14.150105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:14 crc kubenswrapper[5108]: I0202 00:13:14.263335 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:15 crc kubenswrapper[5108]: I0202 00:13:15.232527 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerID="a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795" exitCode=0 Feb 02 00:13:15 crc kubenswrapper[5108]: I0202 00:13:15.232639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795"} Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.225160 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.244050 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.244080 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"bf1f4e8893cf7d38c33c0c17e67ab9bd9445bacbc6cedb29875eaf455b2ef485"} Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.244144 5108 scope.go:117] "RemoveContainer" containerID="a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.250044 5108 generic.go:358] "Generic (PLEG): container finished" podID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerID="491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d" exitCode=0 Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.250093 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d"} Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.311934 5108 scope.go:117] "RemoveContainer" containerID="dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.315780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.315922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.316114 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.323173 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg" (OuterVolumeSpecName: "kube-api-access-dmjzg") pod "fa0ae7f1-2fcb-48e2-9553-1144cc082b96" (UID: "fa0ae7f1-2fcb-48e2-9553-1144cc082b96"). InnerVolumeSpecName "kube-api-access-dmjzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.329178 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities" (OuterVolumeSpecName: "utilities") pod "fa0ae7f1-2fcb-48e2-9553-1144cc082b96" (UID: "fa0ae7f1-2fcb-48e2-9553-1144cc082b96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.332147 5108 scope.go:117] "RemoveContainer" containerID="dc6f982b2d56c1abb172d98e66aa0c15b24571bc47876df35d5985b98e039d3c" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.363083 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa0ae7f1-2fcb-48e2-9553-1144cc082b96" (UID: "fa0ae7f1-2fcb-48e2-9553-1144cc082b96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.417403 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.417438 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.417447 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.440271 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.518381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.518586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.518754 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.519625 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities" (OuterVolumeSpecName: "utilities") pod "41859985-fc1d-4d4e-bbe8-b0a99955ac0a" (UID: "41859985-fc1d-4d4e-bbe8-b0a99955ac0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.523700 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f" (OuterVolumeSpecName: "kube-api-access-dwm9f") pod "41859985-fc1d-4d4e-bbe8-b0a99955ac0a" (UID: "41859985-fc1d-4d4e-bbe8-b0a99955ac0a"). 
InnerVolumeSpecName "kube-api-access-dwm9f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.568059 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41859985-fc1d-4d4e-bbe8-b0a99955ac0a" (UID: "41859985-fc1d-4d4e-bbe8-b0a99955ac0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.577461 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.590753 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.621935 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.621978 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.622019 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.289878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be"} Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.289970 5108 scope.go:117] "RemoveContainer" containerID="491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.290276 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.319046 5108 scope.go:117] "RemoveContainer" containerID="577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.354806 5108 scope.go:117] "RemoveContainer" containerID="b91c60dbd115b4b7905f65ba4aae50ffb73107e888d42e0249b2d0b2231508b8" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.358536 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.362315 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.567090 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" path="/var/lib/kubelet/pods/41859985-fc1d-4d4e-bbe8-b0a99955ac0a/volumes" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.569044 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" path="/var/lib/kubelet/pods/fa0ae7f1-2fcb-48e2-9553-1144cc082b96/volumes" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.806475 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.806771 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwwt9" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" containerID="cri-o://4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" gracePeriod=2 Feb 02 00:13:18 crc kubenswrapper[5108]: I0202 00:13:18.968055 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.061947 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"dfe89a3e-59b8-4707-863b-ed23bea6f273\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.062018 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"dfe89a3e-59b8-4707-863b-ed23bea6f273\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.062077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"dfe89a3e-59b8-4707-863b-ed23bea6f273\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.064456 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities" (OuterVolumeSpecName: "utilities") pod "dfe89a3e-59b8-4707-863b-ed23bea6f273" (UID: "dfe89a3e-59b8-4707-863b-ed23bea6f273"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.076433 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28" (OuterVolumeSpecName: "kube-api-access-ghx28") pod "dfe89a3e-59b8-4707-863b-ed23bea6f273" (UID: "dfe89a3e-59b8-4707-863b-ed23bea6f273"). InnerVolumeSpecName "kube-api-access-ghx28". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.164420 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.164499 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.217028 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfe89a3e-59b8-4707-863b-ed23bea6f273" (UID: "dfe89a3e-59b8-4707-863b-ed23bea6f273"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.267046 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316049 5108 generic.go:358] "Generic (PLEG): container finished" podID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" exitCode=0 Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316170 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc"} Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316268 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c"} Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316270 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316297 5108 scope.go:117] "RemoveContainer" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.369371 5108 scope.go:117] "RemoveContainer" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.382767 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.386906 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.417394 5108 scope.go:117] "RemoveContainer" containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.448007 5108 scope.go:117] "RemoveContainer" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" Feb 02 00:13:19 crc kubenswrapper[5108]: E0202 00:13:19.450721 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc\": container with ID starting with 4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc not found: ID does not exist" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.450876 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc"} err="failed to get container status \"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc\": rpc error: code = NotFound desc = could not find container \"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc\": container with ID starting with 4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc not found: ID does not exist" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.450973 5108 scope.go:117] "RemoveContainer" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" Feb 02 00:13:19 crc kubenswrapper[5108]: E0202 00:13:19.451910 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a\": container with ID starting with bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a not found: ID does not exist" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.452019 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a"} err="failed to get container status \"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a\": rpc error: code = NotFound desc = could not find container \"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a\": container with ID starting with bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a not found: ID does not exist" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.452102 5108 scope.go:117] "RemoveContainer" 
containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" Feb 02 00:13:19 crc kubenswrapper[5108]: E0202 00:13:19.452934 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159\": container with ID starting with 0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159 not found: ID does not exist" containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.452980 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159"} err="failed to get container status \"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159\": rpc error: code = NotFound desc = could not find container \"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159\": container with ID starting with 0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159 not found: ID does not exist" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.574313 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" path="/var/lib/kubelet/pods/dfe89a3e-59b8-4707-863b-ed23bea6f273/volumes" Feb 02 00:13:27 crc kubenswrapper[5108]: I0202 00:13:27.956399 5108 ???:1] "http: TLS handshake error from 192.168.126.11:48570: no serving certificate available for the kubelet" Feb 02 00:13:30 crc kubenswrapper[5108]: I0202 00:13:30.311942 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:13:34 crc kubenswrapper[5108]: I0202 00:13:34.772977 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.526051 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.527895 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528072 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528202 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528357 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528494 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528609 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528722 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528849 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528972 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerName="image-pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529084 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerName="image-pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529200 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerName="pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529359 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerName="pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529481 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529602 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529785 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529908 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530027 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530157 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530389 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530522 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530643 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530874 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531027 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531144 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" 
containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531302 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531435 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531567 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531677 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531788 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532003 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532342 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerName="pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532497 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532621 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532739 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532856 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerName="image-pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532973 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.533097 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="registry-server" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.086207 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.098946 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.099182 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.099986 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100264 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100045 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100108 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100034 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.103801 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.103958 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104308 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104554 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104753 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104960 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105079 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105386 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105597 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105712 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105863 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105983 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106097 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106208 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106365 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106474 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106593 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106698 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.108661 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.108807 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.108948 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109076 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109196 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109366 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109575 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109696 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.110001 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.110141 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.110482 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130425 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130581 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc 
kubenswrapper[5108]: I0202 00:13:36.232346 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232546 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232586 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232618 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232635 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232647 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc 
kubenswrapper[5108]: I0202 00:13:36.232675 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232726 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.236332 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: E0202 00:13:36.236896 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.237375 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.333686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.333868 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334394 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335174 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335303 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335317 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335621 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: E0202 00:13:36.381407 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189045a22bab678b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,LastTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.427312 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.428731 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429414 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429449 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429459 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429467 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" exitCode=2 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429539 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.431663 5108 generic.go:358] "Generic (PLEG): container finished" podID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerID="491b9dc33be340ea8ece574e78c47522d583627c53b52c926c6593004894e871" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.431828 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerDied","Data":"491b9dc33be340ea8ece574e78c47522d583627c53b52c926c6593004894e871"} Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.433019 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.433371 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"7ebfc3060fac9640de69ae937ab85bafacacb465f0f768c08164103023429070"} Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.433554 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.665146 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.665245 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 02 00:13:36 crc kubenswrapper[5108]: E0202 00:13:36.674669 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189045a22bab678b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,LastTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.441499 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.445304 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454"} Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.445691 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.446183 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.446552 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:37 crc kubenswrapper[5108]: E0202 00:13:37.447218 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.677963 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.678624 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.753825 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"baa9da1f-16dc-411f-8968-783a0e3d1efd\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.753972 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"baa9da1f-16dc-411f-8968-783a0e3d1efd\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.753992 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock" (OuterVolumeSpecName: "var-lock") pod "baa9da1f-16dc-411f-8968-783a0e3d1efd" (UID: "baa9da1f-16dc-411f-8968-783a0e3d1efd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754072 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"baa9da1f-16dc-411f-8968-783a0e3d1efd\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754175 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "baa9da1f-16dc-411f-8968-783a0e3d1efd" (UID: "baa9da1f-16dc-411f-8968-783a0e3d1efd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754854 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754892 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.760556 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "baa9da1f-16dc-411f-8968-783a0e3d1efd" (UID: "baa9da1f-16dc-411f-8968-783a0e3d1efd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.856445 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454216 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerDied","Data":"963c03dd266c5096ab10583ebcc3deeb02b48308e6dbedbd6e48c0e23e5a63d6"} Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454911 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963c03dd266c5096ab10583ebcc3deeb02b48308e6dbedbd6e48c0e23e5a63d6" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454385 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454370 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:13:38 crc kubenswrapper[5108]: E0202 00:13:38.456076 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.472407 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.074202 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.075367 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.075956 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.076243 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177490 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177539 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177693 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177712 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177766 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177834 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177841 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178152 5108 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178176 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178188 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178472 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.179487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.279542 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.279572 5108 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.463556 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.464155 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" exitCode=0 Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.464338 5108 scope.go:117] "RemoveContainer" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.464457 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.482095 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.482605 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.490078 5108 scope.go:117] "RemoveContainer" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.506924 5108 scope.go:117] "RemoveContainer" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.521014 5108 scope.go:117] "RemoveContainer" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.536378 5108 scope.go:117] "RemoveContainer" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.555830 5108 scope.go:117] "RemoveContainer" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.566774 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.623395 5108 scope.go:117] "RemoveContainer" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" Feb 02 00:13:39 crc 
kubenswrapper[5108]: E0202 00:13:39.624356 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df\": container with ID starting with 2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df not found: ID does not exist" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624401 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df"} err="failed to get container status \"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df\": rpc error: code = NotFound desc = could not find container \"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df\": container with ID starting with 2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df not found: ID does not exist" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624430 5108 scope.go:117] "RemoveContainer" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.624833 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\": container with ID starting with d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022 not found: ID does not exist" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624893 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022"} err="failed to get container status \"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\": rpc error: code = NotFound desc = could not find container \"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\": container with ID starting with d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022 not found: ID does not exist" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624932 5108 scope.go:117] "RemoveContainer" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.625775 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\": container with ID starting with ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448 not found: ID does not exist" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.625810 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448"} err="failed to get container status \"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\": rpc error: code = NotFound desc = could not find container \"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\": container with ID starting with ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448 not found: ID does not exist" Feb 02 00:13:39 crc kubenswrapper[5108]: 
I0202 00:13:39.625837 5108 scope.go:117] "RemoveContainer" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.626280 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\": container with ID starting with 3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb not found: ID does not exist" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626308 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb"} err="failed to get container status \"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\": rpc error: code = NotFound desc = could not find container \"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\": container with ID starting with 3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb not found: ID does not exist" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626327 5108 scope.go:117] "RemoveContainer" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.626596 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\": container with ID starting with f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb not found: ID does not exist" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626624 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb"} err="failed to get container status \"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\": rpc error: code = NotFound desc = could not find container \"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\": container with ID starting with f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb not found: ID does not exist" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626641 5108 scope.go:117] "RemoveContainer" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2" Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.627054 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\": container with ID starting with f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2 not found: ID does not exist" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.627086 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2"} err="failed to get container status \"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\": rpc error: code = NotFound desc = could not find container \"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\": container 
with ID starting with f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2 not found: ID does not exist" Feb 02 00:13:41 crc kubenswrapper[5108]: I0202 00:13:41.561404 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.238245 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.238998 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.239191 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.239459 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.239740 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.239771 5108 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.240180 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.441544 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.557087 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.558167 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.571753 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.571785 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.572123 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.572402 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:46 crc kubenswrapper[5108]: W0202 00:13:46.594580 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba WatchSource:0}: Error finding container 3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba: Status 404 returned error can't find the container with id 3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.676283 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189045a22bab678b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,LastTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.841972 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms"
Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.512932 5108 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="dac8f2ddfdc264820f0cd3ef205bc5581d02f2a8a465372a19db14b35634b955" exitCode=0
Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"dac8f2ddfdc264820f0cd3ef205bc5581d02f2a8a465372a19db14b35634b955"}
Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba"}
Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513676 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513704 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:47 crc kubenswrapper[5108]: E0202 00:13:47.515056 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.515697 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:47 crc kubenswrapper[5108]: E0202 00:13:47.644684 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s"
Feb 02 00:13:48 crc kubenswrapper[5108]: I0202 00:13:48.529455 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c9a7dc95c0f7f9b83b6e5d752d2720b6307e2eda3e9e6ea2b1d68073e3fb0915"}
Feb 02 00:13:48 crc kubenswrapper[5108]: I0202 00:13:48.529833 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"39469cde5ec0c8d8b15790c95f4c449cccd35906116e1dc7076f1bc0c83e2eab"}
Feb 02 00:13:48 crc kubenswrapper[5108]: I0202 00:13:48.529845 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"98bb12bf0e05d03b24d7490a26732ed32cd9b9185c2fcd0ce8a8d9fb849d4625"}
Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539438 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e977c8b1685dfdee63acf55f866dc21dc07f705c8810ccbdd3349085e9469d2f"}
Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539493 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"83ff388bf90fa95675be6228bb2c49cd302d4f9170ee529be0b002ec0d3cf05a"}
Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539895 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539933 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539967 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:50 crc kubenswrapper[5108]: I0202 00:13:50.920158 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 00:13:50 crc kubenswrapper[5108]: I0202 00:13:50.920301 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.562747 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.562832 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca" exitCode=1
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.565780 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca"}
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.566604 5108 scope.go:117] "RemoveContainer" containerID="88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca"
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.573045 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.573094 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.581529 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:52 crc kubenswrapper[5108]: I0202 00:13:52.575779 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 02 00:13:52 crc kubenswrapper[5108]: I0202 00:13:52.576171 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"bb6fb08ab6c4d00166a440141f5cb57ca69ba366f1f91b9a802c4c4dca7cdbd8"}
Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.549556 5108 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.549881 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.588992 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.589026 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.593719 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.616048 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="719b3a77-5020-470c-bf5f-ad05197649a8"
Feb 02 00:13:55 crc kubenswrapper[5108]: I0202 00:13:55.594701 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:55 crc kubenswrapper[5108]: I0202 00:13:55.594729 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:13:55 crc kubenswrapper[5108]: I0202 00:13:55.598398 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="719b3a77-5020-470c-bf5f-ad05197649a8"
Feb 02 00:13:57 crc kubenswrapper[5108]: I0202 00:13:57.176888 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:13:57 crc kubenswrapper[5108]: I0202 00:13:57.183150 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:13:57 crc kubenswrapper[5108]: I0202 00:13:57.605746 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:13:59 crc kubenswrapper[5108]: I0202 00:13:59.805485 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" containerID="cri-o://83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" gracePeriod=15
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.280807 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m"
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386177 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386331 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386359 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386416 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386469 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386521 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386579 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386634 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386666 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386768 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386820 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") "
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.387794 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.388504 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.388740 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.388835 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.389209 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.393613 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.406052 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45" (OuterVolumeSpecName: "kube-api-access-8gz45") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "kube-api-access-8gz45". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.406285 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.407972 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.408292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.408867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.412350 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.412638 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.412757 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487846 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487893 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487909 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487923 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487955 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487966 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487977 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487990 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488006 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488016 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488028 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488039 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488049 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488061 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") on node \"crc\" DevicePath \"\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.620871 5108 generic.go:358] "Generic (PLEG): container finished" podID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" exitCode=0
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.620980 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerDied","Data":"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"}
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.621017 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerDied","Data":"ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072"}
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.621017 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m"
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.621039 5108 scope.go:117] "RemoveContainer" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.649540 5108 scope.go:117] "RemoveContainer" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"
Feb 02 00:14:00 crc kubenswrapper[5108]: E0202 00:14:00.650105 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826\": container with ID starting with 83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826 not found: ID does not exist" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.650148 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"} err="failed to get container status \"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826\": rpc error: code = NotFound desc = could not find container \"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826\": container with ID starting with 83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826 not found: ID does not exist"
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.689646 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.833844 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.849116 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 02 00:14:01 crc kubenswrapper[5108]: I0202 00:14:01.174042 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 02 00:14:01 crc kubenswrapper[5108]: I0202 00:14:01.831470 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 02 00:14:01 crc kubenswrapper[5108]: I0202 00:14:01.834541 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.255336 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.271822 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.409596 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.970633 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 02 00:14:03 crc kubenswrapper[5108]: I0202 00:14:03.554499 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 02 00:14:03 crc kubenswrapper[5108]: I0202 00:14:03.749690 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.002970 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.308167 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.378257 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.834042 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.336131 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.359854 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.412163 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.434053 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.505758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 02 00:14:06 crc kubenswrapper[5108]: I0202 00:14:06.363381 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 02 00:14:06 crc kubenswrapper[5108]: I0202 00:14:06.389528 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 02 00:14:06 crc kubenswrapper[5108]: I0202 00:14:06.855075 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Feb 02 00:14:07 crc kubenswrapper[5108]: I0202 00:14:07.222976 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 02 00:14:07 crc kubenswrapper[5108]: I0202 00:14:07.625917 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 02 00:14:07 crc kubenswrapper[5108]: I0202 00:14:07.688135 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.308855 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.615171 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.617835 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.876164 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.135166 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.293705 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.295285 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.513221 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.624653 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.631639 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.648304 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.728604 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.731927 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.963936 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.031293 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.173779 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.194479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.452840 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.484779 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.492736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.651683 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.681843 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.695497 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.722993 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.726474 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.880561 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.935631 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.064641 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.065281 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.112796 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.125944 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.140007 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.247636 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.248524 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.301638 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.345016 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.397816 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.507994 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.573252 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.573433 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.584100 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.631004 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.639357 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.813037 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.838985 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.899677 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.997650 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.064176 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.183699 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.263764 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.303030 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.320986 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.322756 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.372485 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.479462 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.562674 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.711526 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.806410 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.835325 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.836031 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.862220 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.898223 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.945242 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.975244 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.036456 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.050978 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.128024 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.191628 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.199715 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.266689 5108 ???:1] "http: TLS handshake error from 192.168.126.11:54534: no serving certificate available for the kubelet"
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.288380 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.438553 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.451531 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.453788 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.475710 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.563934 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.634013 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.801557 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.803803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.826068 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.838615 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.858731 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.881821 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.896394 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.924371 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.110503 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.147435 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.152810 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.152885 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-967dcd4bb-8x5dz","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153376 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153409 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153544 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerName="installer"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153564 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerName="installer"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153576 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153582 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153705 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerName="installer"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153723 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.203566 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.203740 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.207378 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.207929 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208645 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208773 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208826 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208844 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.209072 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.210569 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.211071 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.211848 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.212381 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.212717 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.212847 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.213778 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.218408 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.226163 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.229416 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.229397396 podStartE2EDuration="20.229397396s" podCreationTimestamp="2026-02-02 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:14.227122024 +0000 UTC m=+253.502618964" watchObservedRunningTime="2026-02-02 00:14:14.229397396 +0000 UTC m=+253.504894326"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.235491 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.249896 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.278508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298743 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298789 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-policies\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47dq\" (UniqueName: \"kubernetes.io/projected/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-kube-api-access-n47dq\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298894 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-dir\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298937 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298981 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299000 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299018 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-session\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"
Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-login\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299092 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-error\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400099 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400259 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400291 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400317 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-session\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400353 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400382 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-login\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400411 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-error\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400451 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400483 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400521 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-policies\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n47dq\" (UniqueName: \"kubernetes.io/projected/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-kube-api-access-n47dq\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400582 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400612 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-dir\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400638 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.401150 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.401195 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-dir\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.402112 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-policies\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.402276 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.403066 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.407847 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.407905 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.408310 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-login\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.409002 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.410161 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.410817 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-error\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.411118 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.413207 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-session\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.424534 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47dq\" (UniqueName: \"kubernetes.io/projected/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-kube-api-access-n47dq\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.424581 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.468019 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.524430 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.561290 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.612369 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.797677 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.840292 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.845078 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.864795 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.876726 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.915985 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.971868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.095730 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.213137 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.229516 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.307400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.411115 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.486523 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.509306 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.565147 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" 
path="/var/lib/kubelet/pods/03927a55-b629-4f9c-be0f-3499aba5b90e/volumes" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.642798 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.671650 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.680265 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.768137 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.808524 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.837353 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.867941 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.905741 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.922124 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.937268 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.092078 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.095346 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.232759 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.259344 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.288521 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.292662 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.325122 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.366400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.407975 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.580899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.635854 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.645611 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.712873 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.783492 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.802464 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.813386 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.930588 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.930881 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" gracePeriod=5 Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.934652 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.943572 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.977070 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.997451 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.005674 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 
00:14:17.023004 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.057597 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.155539 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.171714 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.196129 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.331147 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.450193 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.513960 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.514080 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.579678 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.604311 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.823771 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.860818 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.908957 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.972904 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.097602 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.104776 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.261208 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 02 
00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.401084 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.511391 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.613840 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.641356 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.653765 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.725815 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.795355 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.834940 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.940721 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.027362 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.204918 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.226897 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.290437 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.332118 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.470282 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.477531 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.640970 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.645102 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.663753 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.704960 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.739783 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.865004 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.890574 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.019118 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.115465 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.207432 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.375764 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.385016 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.435055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.547999 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.632551 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.681685 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.755766 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.917894 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.919785 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.919871 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.953837 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.972867 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.099305 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.181755 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.262446 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.285400 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.402790 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.739648 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.825388 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.071733 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.135726 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.226385 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.253962 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.283985 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.315380 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.437870 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.512934 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.536442 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.536536 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.542063 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641328 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641368 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641391 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641473 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641467 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641503 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641477 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641492 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641859 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641872 5108 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641882 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641890 5108 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.648246 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.650842 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.669313 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.743125 5108 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.814160 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836794 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836847 5108 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" exitCode=137 Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836913 5108 scope.go:117] "RemoveContainer" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836963 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.852600 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.865092 5108 scope.go:117] "RemoveContainer" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" Feb 02 00:14:22 crc kubenswrapper[5108]: E0202 00:14:22.865903 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454\": container with ID starting with 217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454 not found: ID does not exist" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.865938 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454"} err="failed to get container status \"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454\": rpc error: code = NotFound desc = could not find container \"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454\": container with ID starting with 217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454 not found: ID does not exist" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.940718 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 02 00:14:23 crc 
kubenswrapper[5108]: I0202 00:14:23.366417 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"] Feb 02 00:14:23 crc kubenswrapper[5108]: I0202 00:14:23.569730 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 02 00:14:23 crc kubenswrapper[5108]: I0202 00:14:23.631847 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"] Feb 02 00:14:23 crc kubenswrapper[5108]: I0202 00:14:23.844520 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" event={"ID":"7e6a5122-2dba-4b6d-93a5-734a6f188f7d","Type":"ContainerStarted","Data":"fadcd02b8178ba6c927c1e14a19f08cae37accd4ceb3ffc44455722ae13a67df"} Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.269087 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.853693 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" event={"ID":"7e6a5122-2dba-4b6d-93a5-734a6f188f7d","Type":"ContainerStarted","Data":"ba4a2094fadafcb6a6db42d22900f7e11c02ae09f9387cfbd516df4f873e920d"} Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.855353 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.864021 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.883319 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" podStartSLOduration=50.883293817 podStartE2EDuration="50.883293817s" podCreationTimestamp="2026-02-02 00:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:24.880668506 +0000 UTC m=+264.156165516" watchObservedRunningTime="2026-02-02 00:14:24.883293817 +0000 UTC m=+264.158790777" Feb 02 00:14:39 crc kubenswrapper[5108]: I0202 00:14:39.972088 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f60e56b-3881-49ee-be41-5435327c1be3" containerID="17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a" exitCode=0 Feb 02 00:14:39 crc kubenswrapper[5108]: I0202 00:14:39.972179 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerDied","Data":"17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a"} Feb 02 00:14:39 crc kubenswrapper[5108]: I0202 00:14:39.973183 5108 scope.go:117] "RemoveContainer" containerID="17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a" Feb 02 00:14:40 crc kubenswrapper[5108]: I0202 00:14:40.986815 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerStarted","Data":"5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde"} Feb 02 00:14:40 
crc kubenswrapper[5108]: I0202 00:14:40.987848 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:14:40 crc kubenswrapper[5108]: I0202 00:14:40.988661 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.263418 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.263668 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" containerID="cri-o://675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" gracePeriod=30 Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.286364 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.286946 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" containerID="cri-o://e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" gracePeriod=30 Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.816841 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.818987 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.849782 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850349 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850368 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850381 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850387 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850395 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850401 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850493 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850503 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850512 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.868490 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.874194 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.878262 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.888170 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.888407 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940578 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940777 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940830 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940858 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940886 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940915 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940948 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940995 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: 
\"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941016 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941652 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp" (OuterVolumeSpecName: "tmp") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941827 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp" (OuterVolumeSpecName: "tmp") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941945 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.942079 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca" (OuterVolumeSpecName: "client-ca") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941159 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.942607 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.942867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943077 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943108 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943161 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943183 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943241 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " 
pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943283 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943316 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943325 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943335 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943345 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943354 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943761 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config" (OuterVolumeSpecName: "config") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.955423 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.961333 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.963205 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4" (OuterVolumeSpecName: "kube-api-access-572g4") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "kube-api-access-572g4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.966615 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config" (OuterVolumeSpecName: "config") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.967641 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d" (OuterVolumeSpecName: "kube-api-access-tfk4d") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "kube-api-access-tfk4d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000124 5108 generic.go:358] "Generic (PLEG): container finished" podID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" exitCode=0 Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000256 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000285 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerDied","Data":"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000339 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerDied","Data":"3158eaa8cced5445a37b12560efe834d0b215f5c202cf0145f728d9c8aaa5068"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000362 5108 scope.go:117] "RemoveContainer" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002268 5108 generic.go:358] "Generic (PLEG): container finished" podID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" exitCode=0 Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002379 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerDied","Data":"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerDied","Data":"683d5e48d4bbd76223bfa55ebb9faedf8bd6693391a55afaa0790e34cd786995"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.035350 5108 scope.go:117] "RemoveContainer" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" Feb 02 00:14:43 crc kubenswrapper[5108]: E0202 00:14:43.038032 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686\": container with ID starting with 675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686 not found: ID does not exist" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.038063 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686"} err="failed to get container status \"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686\": rpc error: code = NotFound desc = could not find container \"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686\": container with ID starting with 675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686 not found: ID does not exist" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.038086 5108 scope.go:117] "RemoveContainer" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055618 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055734 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055833 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055946 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055973 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056032 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056067 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056111 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056252 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056263 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056275 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056288 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056299 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056307 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.057214 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.057645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.058737 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.059463 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.059605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.060602 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.062841 5108 scope.go:117] "RemoveContainer" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" Feb 02 00:14:43 crc kubenswrapper[5108]: E0202 00:14:43.068340 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c\": container with ID starting with e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c not found: ID does not exist" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.068398 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c"} err="failed to get container status \"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c\": rpc error: code = NotFound desc = could not find container \"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c\": container with ID starting with e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c not found: ID does not exist" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.069290 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.069355 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.070220 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.075105 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.081070 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.082087 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.087271 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.095145 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.102317 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.185943 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.207345 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.407069 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.468986 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:14:43 crc kubenswrapper[5108]: W0202 00:14:43.478470 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeda4dd1_4f20_4369_bafc_0ac6eb8e8f6b.slice/crio-afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60 WatchSource:0}: Error finding container afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60: Status 404 returned error can't find the container with id afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60 Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.566131 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" path="/var/lib/kubelet/pods/c6bb9533-ef42-4cf1-92de-3a011b1934b8/volumes" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.567297 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" path="/var/lib/kubelet/pods/ebaf16ae-d4df-42da-a1b5-03495d1ef713/volumes" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.010397 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerStarted","Data":"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.010745 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.010757 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerStarted","Data":"58ccd3c5158422578e61b7d7f4b1bdfac6ed4226edc2df1bcf366f305ad50537"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.013014 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerStarted","Data":"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.013054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerStarted","Data":"afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.013277 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.031183 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" podStartSLOduration=2.031169672 podStartE2EDuration="2.031169672s" podCreationTimestamp="2026-02-02 00:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:44.029450076 +0000 UTC m=+283.304947016" watchObservedRunningTime="2026-02-02 00:14:44.031169672 +0000 UTC m=+283.306666592" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.056510 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" podStartSLOduration=2.056493359 podStartE2EDuration="2.056493359s" podCreationTimestamp="2026-02-02 00:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:44.05390334 +0000 UTC m=+283.329400290" watchObservedRunningTime="2026-02-02 00:14:44.056493359 +0000 UTC m=+283.331990289" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.486637 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.622385 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.672628 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gr7jw"] Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.788782 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gr7jw"] Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.788953 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.941490 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942703 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-registry-tls\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942779 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-trusted-ca\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942800 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-registry-certificates\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942840 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b620522-8e7c-4ff5-b88f-658a64778055-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942873 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b620522-8e7c-4ff5-b88f-658a64778055-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942903 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26zw\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-kube-api-access-x26zw\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.964587 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.043965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044019 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b620522-8e7c-4ff5-b88f-658a64778055-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b620522-8e7c-4ff5-b88f-658a64778055-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x26zw\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-kube-api-access-x26zw\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-registry-tls\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044186 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-trusted-ca\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-registry-certificates\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.045504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b620522-8e7c-4ff5-b88f-658a64778055-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.046062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-trusted-ca\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.046551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-registry-certificates\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.051984 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-registry-tls\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.053379 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b620522-8e7c-4ff5-b88f-658a64778055-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.064180 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26zw\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-kube-api-access-x26zw\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.074030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.112213 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.309156 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gr7jw"] Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.906840 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56384: no serving certificate available for the kubelet" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.063805 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" event={"ID":"2b620522-8e7c-4ff5-b88f-658a64778055","Type":"ContainerStarted","Data":"303ec1f9caf3151304e6616bbfad983b04e1c158e69967056463655a668a4260"} Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.063857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" event={"ID":"2b620522-8e7c-4ff5-b88f-658a64778055","Type":"ContainerStarted","Data":"fd0cb1e01a3d1efcdad86229ce823d9f2a11d654fb84184416fde311614bf895"} Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.064132 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.081486 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" podStartSLOduration=2.081466981 podStartE2EDuration="2.081466981s" podCreationTimestamp="2026-02-02 00:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:50.077778563 +0000 UTC m=+289.353275513" watchObservedRunningTime="2026-02-02 00:14:50.081466981 +0000 UTC m=+289.356963911" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.919861 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.920273 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.920341 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.921075 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.921213 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" 
containerName="machine-config-daemon" containerID="cri-o://7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f" gracePeriod=600 Feb 02 00:14:51 crc kubenswrapper[5108]: I0202 00:14:51.073873 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f" exitCode=0 Feb 02 00:14:51 crc kubenswrapper[5108]: I0202 00:14:51.073991 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f"} Feb 02 00:14:52 crc kubenswrapper[5108]: I0202 00:14:52.082636 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"} Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.169884 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk"] Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.194499 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.198447 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.200165 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.216381 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk"] Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.298677 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.298771 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.298858 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.401908 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.401977 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.402104 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.403948 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.414291 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.421769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.526337 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.928691 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk"] Feb 02 00:15:00 crc kubenswrapper[5108]: W0202 00:15:00.936928 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod108138a6_cd12_40d8_be19_580628ff3407.slice/crio-98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9 WatchSource:0}: Error finding container 98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9: Status 404 returned error can't find the container with id 98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9 Feb 02 00:15:01 crc kubenswrapper[5108]: I0202 00:15:01.157515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" event={"ID":"108138a6-cd12-40d8-be19-580628ff3407","Type":"ContainerStarted","Data":"98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9"} Feb 02 00:15:01 crc kubenswrapper[5108]: I0202 00:15:01.741880 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:15:01 crc kubenswrapper[5108]: I0202 00:15:01.743329 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.170184 5108 generic.go:358] "Generic (PLEG): container finished" podID="108138a6-cd12-40d8-be19-580628ff3407" containerID="ad8d695e762a2c513b0dc9d2445c1f0ed0b7ba50992f69b8964360c32e2952c9" exitCode=0 Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.170447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" event={"ID":"108138a6-cd12-40d8-be19-580628ff3407","Type":"ContainerDied","Data":"ad8d695e762a2c513b0dc9d2445c1f0ed0b7ba50992f69b8964360c32e2952c9"} Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.269459 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.269885 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" containerID="cri-o://24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613" gracePeriod=30 Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.293333 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.293771 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" containerID="cri-o://0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" gracePeriod=30 Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.885953 5108 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.916682 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.917427 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.917447 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.917535 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.957413 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.957597 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040601 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040663 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040755 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040801 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040836 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.041424 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.041485 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp" (OuterVolumeSpecName: "tmp") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.041643 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config" (OuterVolumeSpecName: "config") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.049491 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx" (OuterVolumeSpecName: "kube-api-access-4qwjx") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "kube-api-access-4qwjx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.052164 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.137378 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142048 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142170 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142315 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142468 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142635 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142668 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142684 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142698 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142710 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.174524 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.175466 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.175493 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.175648 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.179544 5108 generic.go:358] "Generic (PLEG): container finished" podID="79198f63-420b-43d9-b3a1-bf017d820757" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613" exitCode=0 Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.181425 5108 generic.go:358] "Generic (PLEG): container finished" podID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" exitCode=0 Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244098 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244256 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244426 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244454 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: 
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244622 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244672 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244776 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp" (OuterVolumeSpecName: "tmp") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245155 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245196 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca" (OuterVolumeSpecName: "client-ca") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245512 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245979 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.246186 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config" (OuterVolumeSpecName: "config") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.246232 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.247514 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng" (OuterVolumeSpecName: "kube-api-access-n7tng") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "kube-api-access-n7tng". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248367 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"]
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248398 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerDied","Data":"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"}
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248428 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerDied","Data":"58ccd3c5158422578e61b7d7f4b1bdfac6ed4226edc2df1bcf366f305ad50537"}
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248446 5108 scope.go:117] "RemoveContainer" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248458 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248491 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerDied","Data":"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"}
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248671 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerDied","Data":"afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60"}
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.249181 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.249821 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.250826 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.270881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.274949 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.288665 5108 scope.go:117] "RemoveContainer" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"
Feb 02 00:15:03 crc kubenswrapper[5108]: E0202 00:15:03.289601 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613\": container with ID starting with 24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613 not found: ID does not exist" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.289650 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"} err="failed to get container status \"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613\": rpc error: code = NotFound desc = could not find container \"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613\": container with ID starting with 24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613 not found: ID does not exist"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.289685 5108 scope.go:117] "RemoveContainer" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.303434 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"]
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.308307 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"]
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.324870 5108 scope.go:117] "RemoveContainer" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"
Feb 02 00:15:03 crc kubenswrapper[5108]: E0202 00:15:03.325425 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14\": container with ID starting with 0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14 not found: ID does not exist" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.325493 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"} err="failed to get container status \"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14\": rpc error: code = NotFound desc = could not find container \"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14\": container with ID starting with 0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14 not found: ID does not exist"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"
pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346740 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346967 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347057 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347360 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347616 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347636 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347647 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347657 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347667 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347676 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449016 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449166 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449194 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449221 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.450411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.450816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " 
pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.450971 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.456514 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.459564 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.476417 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.503958 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.564454 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" path="/var/lib/kubelet/pods/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b/volumes" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.574472 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.577917 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.580998 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:15:03 crc kubenswrapper[5108]: W0202 00:15:03.582533 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bf79def_e801_4283_9dcf_dc94d07e4ce7.slice/crio-95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415 WatchSource:0}: Error finding container 95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415: Status 404 returned error can't find the container with id 95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415 Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.585452 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.586376 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.653873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"108138a6-cd12-40d8-be19-580628ff3407\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.654135 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"108138a6-cd12-40d8-be19-580628ff3407\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.654202 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"108138a6-cd12-40d8-be19-580628ff3407\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.655291 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume" (OuterVolumeSpecName: "config-volume") pod "108138a6-cd12-40d8-be19-580628ff3407" (UID: "108138a6-cd12-40d8-be19-580628ff3407"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.660607 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6" (OuterVolumeSpecName: "kube-api-access-npgw6") pod "108138a6-cd12-40d8-be19-580628ff3407" (UID: "108138a6-cd12-40d8-be19-580628ff3407"). InnerVolumeSpecName "kube-api-access-npgw6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.660697 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "108138a6-cd12-40d8-be19-580628ff3407" (UID: "108138a6-cd12-40d8-be19-580628ff3407"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.755848 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.755907 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.755919 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.984926 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:03 crc kubenswrapper[5108]: W0202 00:15:03.995003 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e53688_b891_48f3_a8ac_3b2843a5a8bd.slice/crio-68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de WatchSource:0}: Error finding container 68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de: Status 404 returned error can't find the container with id 68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.221760 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.223450 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" event={"ID":"108138a6-cd12-40d8-be19-580628ff3407","Type":"ContainerDied","Data":"98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.223503 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.228206 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerStarted","Data":"9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.228408 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerStarted","Data":"95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.229079 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.234129 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerStarted","Data":"68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.266326 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" podStartSLOduration=2.266309517 podStartE2EDuration="2.266309517s" podCreationTimestamp="2026-02-02 00:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:04.26418826 +0000 UTC m=+303.539685200" watchObservedRunningTime="2026-02-02 00:15:04.266309517 +0000 UTC m=+303.541806447" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.606423 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:05 crc kubenswrapper[5108]: I0202 00:15:05.241469 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerStarted","Data":"5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407"} Feb 02 00:15:05 crc kubenswrapper[5108]: I0202 00:15:05.261842 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" podStartSLOduration=3.261816139 podStartE2EDuration="3.261816139s" podCreationTimestamp="2026-02-02 00:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 00:15:05.25771983 +0000 UTC m=+304.533216830" watchObservedRunningTime="2026-02-02 00:15:05.261816139 +0000 UTC m=+304.537313109" Feb 02 00:15:05 crc kubenswrapper[5108]: I0202 00:15:05.569299 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79198f63-420b-43d9-b3a1-bf017d820757" path="/var/lib/kubelet/pods/79198f63-420b-43d9-b3a1-bf017d820757/volumes" Feb 02 00:15:06 crc kubenswrapper[5108]: I0202 00:15:06.249160 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:06 crc kubenswrapper[5108]: I0202 00:15:06.257438 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:10 crc kubenswrapper[5108]: I0202 00:15:10.424755 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 00:15:11 crc kubenswrapper[5108]: I0202 00:15:11.083543 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:15:11 crc kubenswrapper[5108]: I0202 00:15:11.145937 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.256912 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.257684 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" containerID="cri-o://5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407" gracePeriod=30 Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.282055 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.282420 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" containerID="cri-o://9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded" gracePeriod=30 Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.384814 5108 generic.go:358] "Generic (PLEG): container finished" podID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerID="5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407" exitCode=0 Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.385525 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerDied","Data":"5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407"} Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.387394 5108 generic.go:358] "Generic (PLEG): container finished" podID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerID="9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded" exitCode=0 Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.387429 5108 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerDied","Data":"9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded"} Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.540683 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574139 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574701 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574721 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574743 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="108138a6-cd12-40d8-be19-580628ff3407" containerName="collect-profiles" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574939 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="108138a6-cd12-40d8-be19-580628ff3407" containerName="collect-profiles" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.575042 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="108138a6-cd12-40d8-be19-580628ff3407" containerName="collect-profiles" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.575055 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644667 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644697 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644734 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644752 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod 
\"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644815 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp" (OuterVolumeSpecName: "tmp") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646161 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646204 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config" (OuterVolumeSpecName: "config") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646388 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.654395 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc" (OuterVolumeSpecName: "kube-api-access-tw4fc") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "kube-api-access-tw4fc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.654542 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.720929 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.721245 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746572 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746599 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746612 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746641 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746651 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746661 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851386 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851445 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851469 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851490 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.872422 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.908958 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.910119 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.910159 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.910476 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.952797 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.952849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.952915 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.953395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 
00:15:23.953435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.953463 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.953999 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.954112 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.954350 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.955254 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.959023 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.962257 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.962404 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.968608 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.052629 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056340 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056404 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056430 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056509 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056688 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056708 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056739 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056772 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.058036 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp" (OuterVolumeSpecName: "tmp") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.058100 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca" (OuterVolumeSpecName: "client-ca") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.058242 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config" (OuterVolumeSpecName: "config") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.061390 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.063898 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd" (OuterVolumeSpecName: "kube-api-access-zbngd") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "kube-api-access-zbngd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157755 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157941 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158003 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158051 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158065 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158080 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158095 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158110 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: 
I0202 00:15:24.159452 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.160113 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.160744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.165584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.177673 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.275478 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.397900 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.400599 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerDied","Data":"68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de"} Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.400675 5108 scope.go:117] "RemoveContainer" containerID="5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.404630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerDied","Data":"95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415"} Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.404722 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.427926 5108 scope.go:117] "RemoveContainer" containerID="9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.434685 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.441035 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.445364 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.450720 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.486633 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.486690 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:24 crc kubenswrapper[5108]: W0202 00:15:24.493577 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77d8873e_3275_40a4_987d_a8d2f5489461.slice/crio-cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212 WatchSource:0}: Error finding container cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212: Status 404 returned error can't find the container with id cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212 Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.411590 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerStarted","Data":"6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.411636 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerStarted","Data":"cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.412041 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.414578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerStarted","Data":"51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.414642 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" 
event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerStarted","Data":"bd55002ad86a550361e62870063a3fae4c4e9cc5bee2e68716b86baa8fdcd306"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.414946 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.437662 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" podStartSLOduration=3.437632329 podStartE2EDuration="3.437632329s" podCreationTimestamp="2026-02-02 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:25.4324713 +0000 UTC m=+324.707968250" watchObservedRunningTime="2026-02-02 00:15:25.437632329 +0000 UTC m=+324.713129259" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.592602 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" path="/var/lib/kubelet/pods/29e53688-b891-48f3-a8ac-3b2843a5a8bd/volumes" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.594810 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" path="/var/lib/kubelet/pods/6bf79def-e801-4283-9dcf-dc94d07e4ce7/volumes" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.595737 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.630926 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" podStartSLOduration=3.630904513 podStartE2EDuration="3.630904513s" podCreationTimestamp="2026-02-02 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:25.453995126 +0000 UTC m=+324.729492086" watchObservedRunningTime="2026-02-02 00:15:25.630904513 +0000 UTC m=+324.906401443" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.818326 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.058580 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.060817 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-52cvp" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" containerID="cri-o://44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.071147 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.071569 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8l8nm" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" 
containerID="cri-o://0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.091768 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.092089 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" containerID="cri-o://5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.110440 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.112531 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wzh6n" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" containerID="cri-o://7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.122343 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.123861 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4h5k" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" containerID="cri-o://3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.131388 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t6j5g"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.148860 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t6j5g"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.149178 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.244573 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e18aabab-6cfe-4b88-9efd-a44ecbcace87-tmp\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.245104 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.245152 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktf9\" (UniqueName: \"kubernetes.io/projected/e18aabab-6cfe-4b88-9efd-a44ecbcace87-kube-api-access-4ktf9\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.245274 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.346710 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e18aabab-6cfe-4b88-9efd-a44ecbcace87-tmp\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.346761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.347044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4ktf9\" (UniqueName: \"kubernetes.io/projected/e18aabab-6cfe-4b88-9efd-a44ecbcace87-kube-api-access-4ktf9\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.347077 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.347420 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e18aabab-6cfe-4b88-9efd-a44ecbcace87-tmp\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.348303 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.354630 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.366517 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ktf9\" (UniqueName: \"kubernetes.io/projected/e18aabab-6cfe-4b88-9efd-a44ecbcace87-kube-api-access-4ktf9\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.462728 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f60e56b-3881-49ee-be41-5435327c1be3" containerID="5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.462926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerDied","Data":"5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.463017 5108 scope.go:117] "RemoveContainer" containerID="17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.468473 5108 generic.go:358] "Generic (PLEG): container finished" podID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerID="3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.468632 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.473283 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerID="44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.473407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" 
event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.480534 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerID="0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.480760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.503660 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerID="7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.503770 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.508667 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.523617 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.553065 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.553173 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.553315 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.557017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities" (OuterVolumeSpecName: "utilities") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.562648 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9" (OuterVolumeSpecName: "kube-api-access-p7wl9") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "kube-api-access-p7wl9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.590776 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.591702 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.655554 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656800 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"d1e2eec1-1c52-4e62-b697-b308e89e1377\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656880 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656891 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656955 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"d1e2eec1-1c52-4e62-b697-b308e89e1377\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657044 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657075 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657119 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"d1e2eec1-1c52-4e62-b697-b308e89e1377\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657516 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657532 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657924 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp" (OuterVolumeSpecName: "tmp") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: W0202 00:15:28.658719 5108 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ef823528-7549-4a91-83c9-e5b243ecb37c/volumes/kubernetes.io~empty-dir/catalog-content Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.658753 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.659143 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.661157 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities" (OuterVolumeSpecName: "utilities") pod "d1e2eec1-1c52-4e62-b697-b308e89e1377" (UID: "d1e2eec1-1c52-4e62-b697-b308e89e1377"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.662035 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs" (OuterVolumeSpecName: "kube-api-access-55fbs") pod "d1e2eec1-1c52-4e62-b697-b308e89e1377" (UID: "d1e2eec1-1c52-4e62-b697-b308e89e1377"). InnerVolumeSpecName "kube-api-access-55fbs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.662365 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc" (OuterVolumeSpecName: "kube-api-access-9f7kc") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "kube-api-access-9f7kc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.663445 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.669925 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.675178 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.731836 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1e2eec1-1c52-4e62-b697-b308e89e1377" (UID: "d1e2eec1-1c52-4e62-b697-b308e89e1377"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758730 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758862 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"c7a5230e-8980-4561-bfb3-015283fcbaa4\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758933 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"c7a5230e-8980-4561-bfb3-015283fcbaa4\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758960 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"c7a5230e-8980-4561-bfb3-015283fcbaa4\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759350 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759373 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759387 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759400 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759411 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 
00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759425 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759436 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759448 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759836 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities" (OuterVolumeSpecName: "utilities") pod "c7a5230e-8980-4561-bfb3-015283fcbaa4" (UID: "c7a5230e-8980-4561-bfb3-015283fcbaa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.760094 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities" (OuterVolumeSpecName: "utilities") pod "ab8f756d-4492-4dfc-ae46-80bb93dd6d86" (UID: "ab8f756d-4492-4dfc-ae46-80bb93dd6d86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.763371 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t" (OuterVolumeSpecName: "kube-api-access-lmw2t") pod "c7a5230e-8980-4561-bfb3-015283fcbaa4" (UID: "c7a5230e-8980-4561-bfb3-015283fcbaa4"). InnerVolumeSpecName "kube-api-access-lmw2t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.764762 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d" (OuterVolumeSpecName: "kube-api-access-drd6d") pod "ab8f756d-4492-4dfc-ae46-80bb93dd6d86" (UID: "ab8f756d-4492-4dfc-ae46-80bb93dd6d86"). InnerVolumeSpecName "kube-api-access-drd6d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.772714 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7a5230e-8980-4561-bfb3-015283fcbaa4" (UID: "c7a5230e-8980-4561-bfb3-015283fcbaa4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860697 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860737 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860747 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860761 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860770 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.862173 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab8f756d-4492-4dfc-ae46-80bb93dd6d86" (UID: "ab8f756d-4492-4dfc-ae46-80bb93dd6d86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.949923 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t6j5g"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.962079 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.512289 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.512286 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerDied","Data":"b13ed7e02312952627a8fe290f3f42545cea89e59d6401fe8e6ee3b38f6bedcd"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.512852 5108 scope.go:117] "RemoveContainer" containerID="5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.517016 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.517055 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.519566 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.519607 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.522714 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.523437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"eb0a00b12767c4ff782045029b2e342458acfc4bf6b005b9598c899c329f4a88"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.524892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" event={"ID":"e18aabab-6cfe-4b88-9efd-a44ecbcace87","Type":"ContainerStarted","Data":"051efece92d82137dd9b5124a826a948d42ddda520b6f14ed690e01ec2e92d42"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.524923 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" event={"ID":"e18aabab-6cfe-4b88-9efd-a44ecbcace87","Type":"ContainerStarted","Data":"b0e2467682612494f5f331113e372242f9f4b19ec7c4adfdf40f6ac8753455cf"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.526138 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.528553 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.529419 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.533049 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.533095 5108 scope.go:117] "RemoveContainer" containerID="3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.560243 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" podStartSLOduration=1.560210997 podStartE2EDuration="1.560210997s" podCreationTimestamp="2026-02-02 00:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:29.558369453 +0000 UTC m=+328.833866463" watchObservedRunningTime="2026-02-02 00:15:29.560210997 +0000 UTC m=+328.835707927" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.573349 5108 scope.go:117] "RemoveContainer" containerID="5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.610538 5108 scope.go:117] "RemoveContainer" containerID="c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.610660 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.616270 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.659632 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.663931 5108 scope.go:117] "RemoveContainer" containerID="44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.684449 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.698381 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.702990 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.707143 5108 scope.go:117] "RemoveContainer" containerID="e6aef248a8876a5e2dc03274ba4ae95994c688af754968e8c9c65f4a76f03504" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.707311 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.710636 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.713799 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.722048 5108 scope.go:117] "RemoveContainer" containerID="9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.724294 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.738963 5108 
scope.go:117] "RemoveContainer" containerID="0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.755005 5108 scope.go:117] "RemoveContainer" containerID="f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.770218 5108 scope.go:117] "RemoveContainer" containerID="f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.785058 5108 scope.go:117] "RemoveContainer" containerID="7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.796563 5108 scope.go:117] "RemoveContainer" containerID="9a151e0c7d30d225dcdec2ca4f289d179587e1b95d1e6242438eb1c220d1f684" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.812297 5108 scope.go:117] "RemoveContainer" containerID="2e1ed35cecd83ec6e1cd535df757ea287981a6c7aebb8cec80b33fdbbc5c5139" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.274464 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66j84"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.274995 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275014 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275028 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275034 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275043 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275049 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275062 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275067 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275075 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275080 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275091 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275096 5108 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275109 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275115 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275127 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275133 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275143 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275149 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275156 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275161 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275168 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275174 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275181 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275186 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275194 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275199 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275324 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275334 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275343 5108 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275349 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275357 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275367 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275460 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275467 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.591381 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66j84"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.591839 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rttj6"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.591640 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.595079 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.640624 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rttj6"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.640831 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.643132 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692632 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-catalog-content\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692722 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-utilities\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4zt\" (UniqueName: \"kubernetes.io/projected/32fc8227-87b8-4b48-9efa-da7031ec6c27-kube-api-access-kd4zt\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692882 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-utilities\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692909 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-catalog-content\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692933 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsm5n\" (UniqueName: \"kubernetes.io/projected/47cf2dc5-b96a-4ed9-acfe-435ef357e479-kube-api-access-hsm5n\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.793786 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4zt\" (UniqueName: \"kubernetes.io/projected/32fc8227-87b8-4b48-9efa-da7031ec6c27-kube-api-access-kd4zt\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.793847 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-utilities\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " 
pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-catalog-content\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hsm5n\" (UniqueName: \"kubernetes.io/projected/47cf2dc5-b96a-4ed9-acfe-435ef357e479-kube-api-access-hsm5n\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794336 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-utilities\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794381 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-catalog-content\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794445 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-catalog-content\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-utilities\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794783 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-catalog-content\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794890 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-utilities\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.816241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4zt\" (UniqueName: \"kubernetes.io/projected/32fc8227-87b8-4b48-9efa-da7031ec6c27-kube-api-access-kd4zt\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " 
pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.816298 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsm5n\" (UniqueName: \"kubernetes.io/projected/47cf2dc5-b96a-4ed9-acfe-435ef357e479-kube-api-access-hsm5n\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.912554 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.960403 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.211143 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rttj6"] Feb 02 00:15:31 crc kubenswrapper[5108]: W0202 00:15:31.215002 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47cf2dc5_b96a_4ed9_acfe_435ef357e479.slice/crio-e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378 WatchSource:0}: Error finding container e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378: Status 404 returned error can't find the container with id e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378 Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.342402 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66j84"] Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.566466 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" path="/var/lib/kubelet/pods/7f60e56b-3881-49ee-be41-5435327c1be3/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.567401 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" path="/var/lib/kubelet/pods/ab8f756d-4492-4dfc-ae46-80bb93dd6d86/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.568446 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" path="/var/lib/kubelet/pods/c7a5230e-8980-4561-bfb3-015283fcbaa4/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.569811 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" path="/var/lib/kubelet/pods/d1e2eec1-1c52-4e62-b697-b308e89e1377/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.577562 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" path="/var/lib/kubelet/pods/ef823528-7549-4a91-83c9-e5b243ecb37c/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.578239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerStarted","Data":"0dd82895d8d5d0659dc7fa38f7be9b023ed8b7d64300cb40f8165b2618660d76"} Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.578273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" 
event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerStarted","Data":"e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378"} Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.571172 5108 generic.go:358] "Generic (PLEG): container finished" podID="32fc8227-87b8-4b48-9efa-da7031ec6c27" containerID="d959d84a0f4b7b71870495427d00ae74eb4e53a953103b78a04200808fa086cd" exitCode=0 Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.571334 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerDied","Data":"d959d84a0f4b7b71870495427d00ae74eb4e53a953103b78a04200808fa086cd"} Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.574950 5108 generic.go:358] "Generic (PLEG): container finished" podID="47cf2dc5-b96a-4ed9-acfe-435ef357e479" containerID="a11015bd30daa66b35f11475c271f148a8c0e46d729b4f21e99d0f802f918818" exitCode=0 Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.575030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerDied","Data":"a11015bd30daa66b35f11475c271f148a8c0e46d729b4f21e99d0f802f918818"} Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.677565 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.950331 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.950693 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwrx9"] Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.950526 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.953624 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.030247 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.030316 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.030353 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.069172 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwrx9"] Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.069572 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.072370 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131789 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76p7j\" (UniqueName: \"kubernetes.io/projected/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-kube-api-access-76p7j\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131966 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.132007 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-utilities\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.132034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-catalog-content\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.132662 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.133073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.157010 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.233844 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-catalog-content\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76p7j\" (UniqueName: \"kubernetes.io/projected/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-kube-api-access-76p7j\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-utilities\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234588 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-catalog-content\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234877 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-utilities\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.256919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76p7j\" (UniqueName: \"kubernetes.io/projected/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-kube-api-access-76p7j\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.296343 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.386887 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.726198 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:15:33 crc kubenswrapper[5108]: W0202 00:15:33.739441 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cf96b4d_fc9a_4ed1_9383_fb367f5a05de.slice/crio-8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0 WatchSource:0}: Error finding container 8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0: Status 404 returned error can't find the container with id 8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0 Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.937319 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwrx9"] Feb 02 00:15:33 crc kubenswrapper[5108]: W0202 00:15:33.949905 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07e00e0c_ae6b_40eb_b439_06e770ecfc2a.slice/crio-3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1 WatchSource:0}: Error finding container 3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1: Status 404 returned error can't find the container with id 3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.614927 5108 generic.go:358] "Generic (PLEG): container finished" podID="07e00e0c-ae6b-40eb-b439-06e770ecfc2a" containerID="d40fbb7dc5b56f14c50a9e5bb126a49d75f6a90e7aa0cbb941f24d67bc1317f9" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.614974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerDied","Data":"d40fbb7dc5b56f14c50a9e5bb126a49d75f6a90e7aa0cbb941f24d67bc1317f9"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.615712 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerStarted","Data":"3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.623250 5108 generic.go:358] "Generic (PLEG): container finished" podID="32fc8227-87b8-4b48-9efa-da7031ec6c27" containerID="243cfd976efb56c1fbd3914ef3a3b9d9975c07131d7b2126faa470f0685ebaae" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.623325 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerDied","Data":"243cfd976efb56c1fbd3914ef3a3b9d9975c07131d7b2126faa470f0685ebaae"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.628318 5108 generic.go:358] "Generic (PLEG): container finished" podID="47cf2dc5-b96a-4ed9-acfe-435ef357e479" containerID="8dc2e03b98df24dbfda41a5175c2a7c82b40a3bf42a22fa3f2f3d29f101f49ef" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.628381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerDied","Data":"8dc2e03b98df24dbfda41a5175c2a7c82b40a3bf42a22fa3f2f3d29f101f49ef"} Feb 02 
00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.635126 5108 generic.go:358] "Generic (PLEG): container finished" podID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.635295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.635327 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerStarted","Data":"8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0"} Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.642819 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerStarted","Data":"666a9143a79043e670103b2fdc2070e9e2a7e8f14e82dd5a4f49644e5d71cb31"} Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.644591 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerStarted","Data":"a433a86a43d536e9ad3c94986300b1a6f329f18d06d96689496a472b756c2df2"} Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.661353 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66j84" podStartSLOduration=4.355130613 podStartE2EDuration="5.661335613s" podCreationTimestamp="2026-02-02 00:15:30 +0000 UTC" firstStartedPulling="2026-02-02 00:15:32.572924723 +0000 UTC m=+331.848421683" lastFinishedPulling="2026-02-02 00:15:33.879129753 +0000 UTC m=+333.154626683" observedRunningTime="2026-02-02 00:15:35.660338513 +0000 UTC m=+334.935835473" watchObservedRunningTime="2026-02-02 00:15:35.661335613 +0000 UTC m=+334.936832543" Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.685261 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rttj6" podStartSLOduration=4.423021046 podStartE2EDuration="5.685237998s" podCreationTimestamp="2026-02-02 00:15:30 +0000 UTC" firstStartedPulling="2026-02-02 00:15:32.576042245 +0000 UTC m=+331.851539175" lastFinishedPulling="2026-02-02 00:15:33.838259197 +0000 UTC m=+333.113756127" observedRunningTime="2026-02-02 00:15:35.680488158 +0000 UTC m=+334.955985098" watchObservedRunningTime="2026-02-02 00:15:35.685237998 +0000 UTC m=+334.960734928" Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.198747 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" containerID="cri-o://527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe" gracePeriod=30 Feb 02 00:15:36 crc kubenswrapper[5108]: E0202 00:15:36.380987 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07e00e0c_ae6b_40eb_b439_06e770ecfc2a.slice/crio-conmon-d6d1ceb2d019203e910a84570ad552dc3de6d75db6f95ea52f0fd54aab6024d2.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ba194a_1171_4ed4_a843_0c39ac61d268.slice/crio-conmon-527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe.scope\": RecentStats: unable to find data in memory cache]" Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.456712 5108 patch_prober.go:28] interesting pod/image-registry-66587d64c8-mjr86 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.22:5000/healthz\": dial tcp 10.217.0.22:5000: connect: connection refused" start-of-body= Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.456802 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.22:5000/healthz\": dial tcp 10.217.0.22:5000: connect: connection refused" Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.651402 5108 generic.go:358] "Generic (PLEG): container finished" podID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" exitCode=0 Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.651457 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c"} Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.653533 5108 generic.go:358] "Generic (PLEG): container finished" podID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerID="527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe" exitCode=0 Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.653751 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerDied","Data":"527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe"} Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.658465 5108 generic.go:358] "Generic (PLEG): container finished" podID="07e00e0c-ae6b-40eb-b439-06e770ecfc2a" containerID="d6d1ceb2d019203e910a84570ad552dc3de6d75db6f95ea52f0fd54aab6024d2" exitCode=0 Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.659915 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerDied","Data":"d6d1ceb2d019203e910a84570ad552dc3de6d75db6f95ea52f0fd54aab6024d2"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.228468 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302648 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302738 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302771 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302866 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302902 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302925 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.303090 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.303144 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.304482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.305032 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.316885 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.317185 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.324397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.325928 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.327397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn" (OuterVolumeSpecName: "kube-api-access-sqbvn") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "kube-api-access-sqbvn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.328867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404812 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404849 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404860 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404870 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404879 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404887 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404895 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.667407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerStarted","Data":"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.668821 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerDied","Data":"1447dcac9c96a7085eca20122133eb4f717b3af0915a27a86280d315ab8e69c0"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.668858 5108 scope.go:117] "RemoveContainer" containerID="527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.669038 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.671832 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerStarted","Data":"af70f43b3c041d3cb1b22e029fe41d4a22fa982aa4755053d2298e608695b0ba"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.701128 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cckv4" podStartSLOduration=4.838770985 podStartE2EDuration="5.701113241s" podCreationTimestamp="2026-02-02 00:15:32 +0000 UTC" firstStartedPulling="2026-02-02 00:15:34.636484394 +0000 UTC m=+333.911981324" lastFinishedPulling="2026-02-02 00:15:35.49882665 +0000 UTC m=+334.774323580" observedRunningTime="2026-02-02 00:15:37.696756932 +0000 UTC m=+336.972253862" watchObservedRunningTime="2026-02-02 00:15:37.701113241 +0000 UTC m=+336.976610171" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.716019 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.717794 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.730934 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jwrx9" podStartSLOduration=4.85897181 podStartE2EDuration="5.73092171s" podCreationTimestamp="2026-02-02 00:15:32 +0000 UTC" firstStartedPulling="2026-02-02 00:15:34.616735551 +0000 UTC m=+333.892232511" lastFinishedPulling="2026-02-02 00:15:35.488685481 +0000 UTC m=+334.764182411" observedRunningTime="2026-02-02 00:15:37.727955492 +0000 UTC m=+337.003452432" watchObservedRunningTime="2026-02-02 00:15:37.73092171 +0000 UTC m=+337.006418640" Feb 02 00:15:39 crc kubenswrapper[5108]: I0202 00:15:39.565107 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" path="/var/lib/kubelet/pods/51ba194a-1171-4ed4-a843-0c39ac61d268/volumes" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.913740 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.914095 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.961883 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.961945 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.965570 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:41 crc kubenswrapper[5108]: I0202 00:15:41.008806 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:41 crc kubenswrapper[5108]: I0202 00:15:41.747258 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:41 crc kubenswrapper[5108]: I0202 00:15:41.756455 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.237691 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.238416 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" containerID="cri-o://6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0" gracePeriod=30 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.270454 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.270742 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" containerID="cri-o://51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a" gracePeriod=30 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.704374 5108 generic.go:358] "Generic (PLEG): container finished" podID="77d8873e-3275-40a4-987d-a8d2f5489461" containerID="6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0" exitCode=0 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.704960 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerDied","Data":"6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0"} Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.706758 5108 generic.go:358] "Generic (PLEG): container finished" podID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerID="51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a" exitCode=0 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.706879 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerDied","Data":"51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a"} Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.296984 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.297352 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.345020 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.387467 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.387517 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.396087 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425324 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425879 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425896 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425906 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425912 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.426015 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.426025 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.500497 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.500864 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501343 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp" (OuterVolumeSpecName: "tmp") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501468 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501569 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.502104 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.502137 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca" (OuterVolumeSpecName: "client-ca") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.502766 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config" (OuterVolumeSpecName: "config") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.506957 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.508947 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn" (OuterVolumeSpecName: "kube-api-access-q5gmn") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "kube-api-access-q5gmn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.517624 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.517809 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.517928 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607290 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607331 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607343 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607354 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.682911 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708229 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghk22\" (UniqueName: \"kubernetes.io/projected/cc5b803c-69f0-47e3-89b1-54dadfc985a6-kube-api-access-ghk22\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708279 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-client-ca\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708304 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-config\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708325 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc5b803c-69f0-47e3-89b1-54dadfc985a6-tmp\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5b803c-69f0-47e3-89b1-54dadfc985a6-serving-cert\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.714865 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerDied","Data":"bd55002ad86a550361e62870063a3fae4c4e9cc5bee2e68716b86baa8fdcd306"} Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.714914 5108 scope.go:117] "RemoveContainer" containerID="51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.715059 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.719023 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.719030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerDied","Data":"cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212"} Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.739730 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.740364 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.740382 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.740484 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.748435 5108 scope.go:117] "RemoveContainer" containerID="6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786049 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786084 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786141 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786177 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786187 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786387 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811585 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811686 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811710 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.812349 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp" (OuterVolumeSpecName: "tmp") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.812903 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca" (OuterVolumeSpecName: "client-ca") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.812925 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813095 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config" (OuterVolumeSpecName: "config") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813109 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813275 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813432 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5b803c-69f0-47e3-89b1-54dadfc985a6-serving-cert\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813475 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-tmp\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-client-ca\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813590 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-config\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813694 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghk22\" (UniqueName: \"kubernetes.io/projected/cc5b803c-69f0-47e3-89b1-54dadfc985a6-kube-api-access-ghk22\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-client-ca\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813761 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhrz\" 
(UniqueName: \"kubernetes.io/projected/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-kube-api-access-wdhrz\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813788 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-serving-cert\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813863 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-proxy-ca-bundles\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-config\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.815972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-client-ca\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.817988 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc5b803c-69f0-47e3-89b1-54dadfc985a6-tmp\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818468 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc5b803c-69f0-47e3-89b1-54dadfc985a6-tmp\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818738 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818769 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818789 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818804 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.819579 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-config\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.820530 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.828661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5b803c-69f0-47e3-89b1-54dadfc985a6-serving-cert\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.840449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5" (OuterVolumeSpecName: "kube-api-access-w4kx5") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "kube-api-access-w4kx5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.848098 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghk22\" (UniqueName: \"kubernetes.io/projected/cc5b803c-69f0-47e3-89b1-54dadfc985a6-kube-api-access-ghk22\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.865598 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.922899 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-client-ca\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.922950 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-config\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923154 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhrz\" (UniqueName: \"kubernetes.io/projected/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-kube-api-access-wdhrz\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923253 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-serving-cert\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923277 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-proxy-ca-bundles\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923768 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-tmp\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924068 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924092 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-client-ca\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 
02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924992 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-tmp\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.925394 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-config\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.925507 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-proxy-ca-bundles\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.932363 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-serving-cert\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.945058 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhrz\" (UniqueName: \"kubernetes.io/projected/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-kube-api-access-wdhrz\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.064063 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.067875 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.100939 5108 util.go:30] "No sandbox for pod can be found. 
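The mount-side records above, "VerifyControllerAttachedVolume started", "MountVolume started", then "MountVolume.SetUp succeeded", are the mirror image of the teardown sequence sketched earlier, this time for the replacement controller-manager and route-controller-manager pods, and they interleave freely with the old pods' unmounts inside the same reconciler passes. A rough sketch of that two-phase progression, again with illustrative types rather than kubelet's real ones:

```go
package main

import "fmt"

// Each desired volume advances through two phases: first it is verified
// as attached to the node, then the plugin's SetUp mounts it into place.
type mountState int

const (
	unverified mountState = iota
	attached
	mounted
)

func reconcileMounts(pod string, vols map[string]mountState) {
	for name, st := range vols {
		switch st {
		case unverified:
			fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", name, pod)
			vols[name] = attached
		case attached:
			fmt.Printf("MountVolume started for volume %q pod %q\n", name, pod)
			// The real code calls the plugin's SetUp here.
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
			vols[name] = mounted
		}
	}
}

func main() {
	vols := map[string]mountState{"client-ca": unverified, "serving-cert": unverified}
	reconcileMounts("controller-manager-577b8bfd5c-8n7dp", vols) // verifies
	reconcileMounts("controller-manager-577b8bfd5c-8n7dp", vols) // mounts
}
```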
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.325918 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg"] Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.529817 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp"] Feb 02 00:15:44 crc kubenswrapper[5108]: W0202 00:15:44.539682 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27cfbd17_fe89_42f2_8cbf_ba0587c2e216.slice/crio-7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def WatchSource:0}: Error finding container 7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def: Status 404 returned error can't find the container with id 7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.724886 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" event={"ID":"27cfbd17-fe89-42f2-8cbf-ba0587c2e216","Type":"ContainerStarted","Data":"91a24f251d05389d13c6a20c13002484b1140f85b6cb416ae2bde6d84d328b2a"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.724948 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" event={"ID":"27cfbd17-fe89-42f2-8cbf-ba0587c2e216","Type":"ContainerStarted","Data":"7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.725330 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.728272 5108 patch_prober.go:28] interesting pod/controller-manager-577b8bfd5c-8n7dp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.728339 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" podUID="27cfbd17-fe89-42f2-8cbf-ba0587c2e216" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.731095 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" event={"ID":"cc5b803c-69f0-47e3-89b1-54dadfc985a6","Type":"ContainerStarted","Data":"6ed6957229bbe464a286863bea6453b78fd9ff6c983cb4cd8723f9a91d1892b6"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.731131 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" event={"ID":"cc5b803c-69f0-47e3-89b1-54dadfc985a6","Type":"ContainerStarted","Data":"986223b802da487179d036e6cc603afcadfbd026d94190f2f1fbb2264bc934fd"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.731588 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.748654 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" podStartSLOduration=2.748635552 podStartE2EDuration="2.748635552s" podCreationTimestamp="2026-02-02 00:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:44.748085315 +0000 UTC m=+344.023582255" watchObservedRunningTime="2026-02-02 00:15:44.748635552 +0000 UTC m=+344.024132482" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.773526 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" podStartSLOduration=2.773505845 podStartE2EDuration="2.773505845s" podCreationTimestamp="2026-02-02 00:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:44.77265699 +0000 UTC m=+344.048153930" watchObservedRunningTime="2026-02-02 00:15:44.773505845 +0000 UTC m=+344.049002775" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.996366 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:45 crc kubenswrapper[5108]: I0202 00:15:45.564874 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" path="/var/lib/kubelet/pods/36503b52-c5de-4acc-9b2d-4b006a58c586/volumes" Feb 02 00:15:45 crc kubenswrapper[5108]: I0202 00:15:45.567105 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" path="/var/lib/kubelet/pods/77d8873e-3275-40a4-987d-a8d2f5489461/volumes" Feb 02 00:15:45 crc kubenswrapper[5108]: I0202 00:15:45.749293 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.147409 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.157438 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.157606 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.159664 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.160754 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.161286 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.271584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"auto-csr-approver-29499856-n677f\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.372693 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"auto-csr-approver-29499856-n677f\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.396318 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"auto-csr-approver-29499856-n677f\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.491084 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.952600 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:16:01 crc kubenswrapper[5108]: I0202 00:16:01.834737 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499856-n677f" event={"ID":"b2d68061-8bea-4670-828e-3fd982547198","Type":"ContainerStarted","Data":"6566d979307f3380d2c4f036bef1b6dbef18c8813653cec90a70aa044d64d0e3"} Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.328690 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-lw9mr" Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.356594 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-lw9mr" Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.853854 5108 generic.go:358] "Generic (PLEG): container finished" podID="b2d68061-8bea-4670-828e-3fd982547198" containerID="b0d175fd10d4619cf043b11fd6ec6f1927ee4a1ffad44abf1e805ecf0fef43df" exitCode=0 Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.854043 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499856-n677f" event={"ID":"b2d68061-8bea-4670-828e-3fd982547198","Type":"ContainerDied","Data":"b0d175fd10d4619cf043b11fd6ec6f1927ee4a1ffad44abf1e805ecf0fef43df"} Feb 02 00:16:05 crc kubenswrapper[5108]: I0202 00:16:05.357808 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-04 00:11:04 +0000 UTC" deadline="2026-02-23 01:35:38.826451502 +0000 UTC" Feb 02 00:16:05 crc kubenswrapper[5108]: I0202 00:16:05.357873 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="505h19m33.468584702s" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.288369 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.358290 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-04 00:11:04 +0000 UTC" deadline="2026-02-26 07:39:43.056299098 +0000 UTC" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.358330 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="583h23m36.697972586s" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.455998 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"b2d68061-8bea-4670-828e-3fd982547198\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.462315 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s" (OuterVolumeSpecName: "kube-api-access-w4s5s") pod "b2d68061-8bea-4670-828e-3fd982547198" (UID: "b2d68061-8bea-4670-828e-3fd982547198"). InnerVolumeSpecName "kube-api-access-w4s5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.557482 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") on node \"crc\" DevicePath \"\"" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.867576 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.867605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499856-n677f" event={"ID":"b2d68061-8bea-4670-828e-3fd982547198","Type":"ContainerDied","Data":"6566d979307f3380d2c4f036bef1b6dbef18c8813653cec90a70aa044d64d0e3"} Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.867657 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6566d979307f3380d2c4f036bef1b6dbef18c8813653cec90a70aa044d64d0e3" Feb 02 00:17:20 crc kubenswrapper[5108]: I0202 00:17:20.919061 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:17:20 crc kubenswrapper[5108]: I0202 00:17:20.919554 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:17:50 crc kubenswrapper[5108]: I0202 00:17:50.920044 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:17:50 crc kubenswrapper[5108]: I0202 00:17:50.921518 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.158787 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.160391 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2d68061-8bea-4670-828e-3fd982547198" containerName="oc" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.160407 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d68061-8bea-4670-828e-3fd982547198" containerName="oc" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.160553 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b2d68061-8bea-4670-828e-3fd982547198" containerName="oc" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.166887 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.169665 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.170153 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.170153 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.170187 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.276150 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"auto-csr-approver-29499858-dzzxv\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.379010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"auto-csr-approver-29499858-dzzxv\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.411524 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"auto-csr-approver-29499858-dzzxv\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.500696 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.767071 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:18:01 crc kubenswrapper[5108]: I0202 00:18:01.648708 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" event={"ID":"431bfb08-11a6-4c66-893c-650ea32d97b3","Type":"ContainerStarted","Data":"9c44358844cd8275a7e0441686ab61a17e123743a63f4d684b49bae3cad21589"} Feb 02 00:18:02 crc kubenswrapper[5108]: I0202 00:18:02.657400 5108 generic.go:358] "Generic (PLEG): container finished" podID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerID="ff61ff81d7abb5723358d9eb219b89d933545279f212b14a8a7b31b99a0fd8b3" exitCode=0 Feb 02 00:18:02 crc kubenswrapper[5108]: I0202 00:18:02.657500 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" event={"ID":"431bfb08-11a6-4c66-893c-650ea32d97b3","Type":"ContainerDied","Data":"ff61ff81d7abb5723358d9eb219b89d933545279f212b14a8a7b31b99a0fd8b3"} Feb 02 00:18:03 crc kubenswrapper[5108]: I0202 00:18:03.956075 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.033933 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"431bfb08-11a6-4c66-893c-650ea32d97b3\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.043468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285" (OuterVolumeSpecName: "kube-api-access-zb285") pod "431bfb08-11a6-4c66-893c-650ea32d97b3" (UID: "431bfb08-11a6-4c66-893c-650ea32d97b3"). InnerVolumeSpecName "kube-api-access-zb285". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.135095 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") on node \"crc\" DevicePath \"\"" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.690568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" event={"ID":"431bfb08-11a6-4c66-893c-650ea32d97b3","Type":"ContainerDied","Data":"9c44358844cd8275a7e0441686ab61a17e123743a63f4d684b49bae3cad21589"} Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.690645 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c44358844cd8275a7e0441686ab61a17e123743a63f4d684b49bae3cad21589" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.690795 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.919445 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.921164 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.921313 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.923313 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.923478 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e" gracePeriod=600 Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.811474 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e" exitCode=0 Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.811569 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"} Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.811974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31"} Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.812013 5108 scope.go:117] "RemoveContainer" containerID="7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.134890 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.137014 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerName="oc" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.137066 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerName="oc" Feb 02 00:20:00 crc 
kubenswrapper[5108]: I0202 00:20:00.137275 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerName="oc" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.142341 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.145948 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.146878 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.146927 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.147348 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.256681 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"auto-csr-approver-29499860-n8hbz\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.358422 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"auto-csr-approver-29499860-n8hbz\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.380725 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"auto-csr-approver-29499860-n8hbz\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.472131 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.674029 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:20:01 crc kubenswrapper[5108]: I0202 00:20:01.570206 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerStarted","Data":"d21f25759a585ac6a1b9f8e54ec2077c9f4fd028ce77db4c07b5381baf4072a2"} Feb 02 00:20:01 crc kubenswrapper[5108]: I0202 00:20:01.818440 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:20:01 crc kubenswrapper[5108]: I0202 00:20:01.818614 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:20:02 crc kubenswrapper[5108]: I0202 00:20:02.623549 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerStarted","Data":"4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf"} Feb 02 00:20:02 crc kubenswrapper[5108]: I0202 00:20:02.659381 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" podStartSLOduration=1.355852188 podStartE2EDuration="2.659345998s" podCreationTimestamp="2026-02-02 00:20:00 +0000 UTC" firstStartedPulling="2026-02-02 00:20:00.683883691 +0000 UTC m=+599.959380621" lastFinishedPulling="2026-02-02 00:20:01.987377501 +0000 UTC m=+601.262874431" observedRunningTime="2026-02-02 00:20:02.643713064 +0000 UTC m=+601.919210004" watchObservedRunningTime="2026-02-02 00:20:02.659345998 +0000 UTC m=+601.934842938" Feb 02 00:20:03 crc kubenswrapper[5108]: I0202 00:20:03.638048 5108 generic.go:358] "Generic (PLEG): container finished" podID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerID="4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf" exitCode=0 Feb 02 00:20:03 crc kubenswrapper[5108]: I0202 00:20:03.638355 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerDied","Data":"4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf"} Feb 02 00:20:04 crc kubenswrapper[5108]: I0202 00:20:04.941009 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.054622 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"c1c738be-c891-4aa6-adfd-c1234cf80512\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.063716 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b" (OuterVolumeSpecName: "kube-api-access-5962b") pod "c1c738be-c891-4aa6-adfd-c1234cf80512" (UID: "c1c738be-c891-4aa6-adfd-c1234cf80512"). 
InnerVolumeSpecName "kube-api-access-5962b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.156965 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.655271 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerDied","Data":"d21f25759a585ac6a1b9f8e54ec2077c9f4fd028ce77db4c07b5381baf4072a2"} Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.655381 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21f25759a585ac6a1b9f8e54ec2077c9f4fd028ce77db4c07b5381baf4072a2" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.655313 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.354010 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"] Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.354724 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" containerID="cri-o://1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.354778 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" containerID="cri-o://c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.539361 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66k84"] Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540093 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller" containerID="cri-o://e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540210 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb" containerID="cri-o://430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540303 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node" containerID="cri-o://dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540306 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" 
podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging" containerID="cri-o://5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540346 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540384 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb" containerID="cri-o://af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540239 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd" containerID="cri-o://99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.580035 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller" containerID="cri-o://32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.608165 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.652322 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk"] Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653164 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerName="oc" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653192 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerName="oc" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653282 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653294 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653308 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653319 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653452 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653478 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653496 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerName="oc" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.661125 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.673677 5108 generic.go:358] "Generic (PLEG): container finished" podID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" exitCode=0 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.673702 5108 generic.go:358] "Generic (PLEG): container finished" podID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" exitCode=0 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674094 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674391 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerDied","Data":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674419 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerDied","Data":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerDied","Data":"b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674454 5108 scope.go:117] "RemoveContainer" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.677820 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.677846 5108 generic.go:358] "Generic (PLEG): container finished" podID="24f8cedc-9b82-4ef7-a7db-4ce57803e0ce" containerID="9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9" exitCode=2 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.677951 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerDied","Data":"9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680002 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod 
\"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680082 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680963 5108 scope.go:117] "RemoveContainer" containerID="9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.681567 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.681612 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.684487 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.695840 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb" (OuterVolumeSpecName: "kube-api-access-rsmhb") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "kube-api-access-rsmhb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.696858 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.729468 5108 scope.go:117] "RemoveContainer" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.753700 5108 scope.go:117] "RemoveContainer" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: E0202 00:20:06.754640 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": container with ID starting with c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017 not found: ID does not exist" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.754677 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} err="failed to get container status \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": rpc error: code = NotFound desc = could not find container \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": container with ID starting with c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017 not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.754698 5108 scope.go:117] "RemoveContainer" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: E0202 00:20:06.754969 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": container with ID starting with 1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe not found: ID does not exist" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.754992 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} err="failed to get container status \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": rpc error: code = NotFound desc = could not find container \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": container with ID starting with 1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.755008 5108 scope.go:117] "RemoveContainer" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.755280 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} err="failed to get container status \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": rpc error: code = NotFound desc = could not find container \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": container with ID starting with c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017 not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.755327 5108 
scope.go:117] "RemoveContainer" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.759805 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} err="failed to get container status \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": rpc error: code = NotFound desc = could not find container \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": container with ID starting with 1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.781947 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2h9\" (UniqueName: \"kubernetes.io/projected/68ee81b3-e585-46a6-b47c-666f0c3f187f-kube-api-access-8n2h9\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782048 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782122 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782155 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782166 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782177 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782186 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883689 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883711 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2h9\" (UniqueName: \"kubernetes.io/projected/68ee81b3-e585-46a6-b47c-666f0c3f187f-kube-api-access-8n2h9\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.884632 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.884978 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.891181 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.902518 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2h9\" (UniqueName: \"kubernetes.io/projected/68ee81b3-e585-46a6-b47c-666f0c3f187f-kube-api-access-8n2h9\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" 
Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.965160 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-acl-logging/0.log"
Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.965756 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-controller/0.log"
Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.966341 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.017803 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.022465 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"]
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.029710 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"]
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.034742 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-88x4v"]
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035373 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035395 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035411 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035418 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035427 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035432 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035439 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035446 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035458 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035465 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035476 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035482 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035491 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035498 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035521 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kubecfg-setup"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035527 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kubecfg-setup"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035538 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035544 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035643 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035656 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035665 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035675 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035685 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035708 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035715 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035722 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb"
Feb 02 00:20:07 crc kubenswrapper[5108]: W0202 00:20:07.038534 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68ee81b3_e585_46a6_b47c_666f0c3f187f.slice/crio-5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112 WatchSource:0}: Error finding container 5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112: Status 404 returned error can't find the container with id 5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.046337 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.087982 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088113 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088121 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088254 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088299 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088337 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088319 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088342 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088446 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088523 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088552 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088588 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088682 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088644 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log" (OuterVolumeSpecName: "node-log") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088759 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088787 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088698 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088717 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash" (OuterVolumeSpecName: "host-slash") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088856 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088948 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089019 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088971 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089108 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket" (OuterVolumeSpecName: "log-socket") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089151 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089236 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088997 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089247 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089280 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089260 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089326 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") "
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089436 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089541 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089571 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089970 5108 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090001 5108 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090019 5108 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090034 5108 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090046 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090057 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090069 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090082 5108 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090093 5108 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090104 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090115 5108 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090127 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090137 5108 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 02 00:20:07
crc kubenswrapper[5108]: I0202 00:20:07.090148 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090160 5108 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090171 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090182 5108 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.093655 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7" (OuterVolumeSpecName: "kube-api-access-vfgl7") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "kube-api-access-vfgl7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.094032 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.112822 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192301 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-node-log\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192394 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192539 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-netd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192643 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-bin\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192694 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-kubelet\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192730 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-env-overrides\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193066 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193121 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-config\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193177 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-systemd-units\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193203 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6l68\" (UniqueName: \"kubernetes.io/projected/9ea50c71-4688-4245-91de-32018497eac8-kube-api-access-n6l68\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193266 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-script-lib\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193755 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-etc-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-systemd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193881 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-log-socket\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193898 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-var-lib-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-ovn\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194048 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea50c71-4688-4245-91de-32018497eac8-ovn-node-metrics-cert\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-slash\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194214 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-netns\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194534 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194568 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194582 5108 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.295913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-node-log\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.295985 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-netd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296022 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-bin\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296042 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-kubelet\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-env-overrides\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-node-log\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296214 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296269 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-netd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296358 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-config\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296424 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-kubelet\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296493 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-bin\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297012 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-config\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-systemd-units\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297148 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-systemd-units\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297175 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6l68\" (UniqueName: \"kubernetes.io/projected/9ea50c71-4688-4245-91de-32018497eac8-kube-api-access-n6l68\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297196 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-script-lib\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297245 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-etc-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-systemd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297335 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-log-socket\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297413 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-var-lib-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-ovn\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297470 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea50c71-4688-4245-91de-32018497eac8-ovn-node-metrics-cert\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297499 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-slash\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297517 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-netns\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297659 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-env-overrides\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297688 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-log-socket\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297694 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-var-lib-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297675 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297750 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-ovn\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-slash\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-etc-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.298088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-systemd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.298123 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-netns\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.301589 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-script-lib\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.307682 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea50c71-4688-4245-91de-32018497eac8-ovn-node-metrics-cert\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.333490 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6l68\" (UniqueName: \"kubernetes.io/projected/9ea50c71-4688-4245-91de-32018497eac8-kube-api-access-n6l68\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.380938 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: W0202 00:20:07.416584 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ea50c71_4688_4245_91de_32018497eac8.slice/crio-87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81 WatchSource:0}: Error finding container 87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81: Status 404 returned error can't find the container with id 87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.572076 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" path="/var/lib/kubelet/pods/0298f7da-43a3-48a4-8e32-b772a82bd62d/volumes" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.686889 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" event={"ID":"68ee81b3-e585-46a6-b47c-666f0c3f187f","Type":"ContainerStarted","Data":"d43ea4f141e778ce15c7d84c6be8fc1afe568358ebb8a829408c47103ec6b179"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.686937 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" event={"ID":"68ee81b3-e585-46a6-b47c-666f0c3f187f","Type":"ContainerStarted","Data":"ca0b6506433443b50731051676008349603ee2480502143e3963bceceb6c8072"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.686948 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" event={"ID":"68ee81b3-e585-46a6-b47c-666f0c3f187f","Type":"ContainerStarted","Data":"5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691291 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-acl-logging/0.log" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691705 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-controller/0.log" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691959 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691976 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691983 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691989 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691995 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" 
containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692002 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692009 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" exitCode=143 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692016 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" exitCode=143 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692137 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692172 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692182 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692196 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692205 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692210 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: 
I0202 00:20:07.692217 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692243 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692250 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692255 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692260 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692266 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692271 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692276 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692282 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692287 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692303 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692310 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692315 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692321 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692327 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692332 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692337 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692342 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692347 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"7a2461c6a473f94ba1ea1904c2b0cd4abbd44d50e56c3ab93bba762c867a78ab"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692362 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692368 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692373 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692378 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692383 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692388 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692393 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692398 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692403 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692418 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692681 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.696164 5108 generic.go:358] "Generic (PLEG): container finished" podID="9ea50c71-4688-4245-91de-32018497eac8" containerID="f23786514e364fed84da6806a7ffc903708b5c196da419cc70977c4987182a7a" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.696273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerDied","Data":"f23786514e364fed84da6806a7ffc903708b5c196da419cc70977c4987182a7a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.696332 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.699594 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.699643 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerStarted","Data":"406af3ef6372a6e1fc055ce202a3a9c98241fd5d181894169fdb5f42557f16ec"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.715869 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" podStartSLOduration=1.7158488109999999 podStartE2EDuration="1.715848811s" podCreationTimestamp="2026-02-02 00:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:20:07.715260656 +0000 UTC m=+606.990757606" watchObservedRunningTime="2026-02-02 00:20:07.715848811 +0000 UTC m=+606.991345741" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.719523 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.743537 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.761538 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66k84"] Feb 02 00:20:07 crc kubenswrapper[5108]: 
I0202 00:20:07.764043 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66k84"]
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.780559 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.808165 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.825847 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.839395 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.855609 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.873947 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.891320 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.896821 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.896861 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.896885 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.897306 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897373 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897424 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.897946 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897975 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897991 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.898277 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898308 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898332 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.898603 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898651 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898681 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.899011 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899041 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899062 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.899531 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899558 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899578 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.899821 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899837 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899850 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.900113 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.900130 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.900142 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901666 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901692 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901938 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901957 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902245 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902266 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902544 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902568 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902888 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902909 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903127 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903146 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903376 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903395 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903682 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903703 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903994 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904029 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904400 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904424 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904658 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904680 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906184 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist"
container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906220 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906564 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906599 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906851 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906873 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907581 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907618 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907897 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907935 5108 scope.go:117] "RemoveContainer" 
containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908372 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908407 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908899 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908950 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909200 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909238 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909460 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909479 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909694 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find 
container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909711 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909929 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909949 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910213 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910260 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910555 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910582 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910817 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910837 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.911053 5108 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.911074 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.912850 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.912875 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.913650 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.913734 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914537 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914563 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914961 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 
430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914999 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.915811 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.915832 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.916483 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.916527 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.916962 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.707628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"9609f9dbefd49afe34553b2ff7d0ff2adcf2c7e9cf92ab924ac3aca6f0975601"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"7b587dca3afe45a86c2a1781b6863cb401c6e6c9897d81b60491b42517896bec"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"586e2b02889aa5a52eb290641469801a2abfd503960e49e3a04449766cd54cba"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708141 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" 
event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"76beff8c79e081f35c4e907b6e7547b9fe9e2aaaa1ce368968fcee01609ac155"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"bba4ab8158708ab6919840fd8dcb47d067983480a673033ee09671a2a544a96a"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708158 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"d3aaa098b91666bae033d23ff717732330e1766e040e26e76dd4de0ffc3a107a"} Feb 02 00:20:09 crc kubenswrapper[5108]: I0202 00:20:09.570199 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" path="/var/lib/kubelet/pods/d0c5973e-49ea-41a0-87d5-c8e867ee8a66/volumes" Feb 02 00:20:11 crc kubenswrapper[5108]: I0202 00:20:11.738900 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"4964cc7134072d267bed3957a8780d31bc5847382791d3bf48ddaec539be6182"} Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.761815 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"5319f1393973320f2445163ace38aeba4800d88d1bd4739403799cac20641a48"} Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.762514 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.762586 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.802443 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" podStartSLOduration=6.802388611 podStartE2EDuration="6.802388611s" podCreationTimestamp="2026-02-02 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:20:13.797675039 +0000 UTC m=+613.073171969" watchObservedRunningTime="2026-02-02 00:20:13.802388611 +0000 UTC m=+613.077885541" Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.803256 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:14 crc kubenswrapper[5108]: I0202 00:20:14.773469 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:14 crc kubenswrapper[5108]: I0202 00:20:14.820688 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:46 crc kubenswrapper[5108]: I0202 00:20:46.826831 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:50 crc kubenswrapper[5108]: I0202 00:20:50.919721 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:20:50 crc kubenswrapper[5108]: I0202 00:20:50.920130 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.154494 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.155733 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cckv4" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" containerID="cri-o://428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" gracePeriod=30 Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.571443 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.737529 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.737606 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.737668 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.740980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities" (OuterVolumeSpecName: "utilities") pod "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" (UID: "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.747500 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj" (OuterVolumeSpecName: "kube-api-access-c4ntj") pod "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" (UID: "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de"). InnerVolumeSpecName "kube-api-access-c4ntj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.750950 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" (UID: "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.840571 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.840617 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.840627 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234855 5108 generic.go:358] "Generic (PLEG): container finished" podID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" exitCode=0 Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234931 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8"} Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234978 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0"} Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234981 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234997 5108 scope.go:117] "RemoveContainer" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.261449 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.265292 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.268468 5108 scope.go:117] "RemoveContainer" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.305152 5108 scope.go:117] "RemoveContainer" containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.320794 5108 scope.go:117] "RemoveContainer" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" Feb 02 00:21:13 crc kubenswrapper[5108]: E0202 00:21:13.321494 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8\": container with ID starting with 428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8 not found: ID does not exist" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.321527 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8"} err="failed to get container status \"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8\": rpc error: code = NotFound desc = could not find container \"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8\": container with ID starting with 428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8 not found: ID does not exist" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.321551 5108 scope.go:117] "RemoveContainer" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" Feb 02 00:21:13 crc kubenswrapper[5108]: E0202 00:21:13.322191 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c\": container with ID starting with c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c not found: ID does not exist" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.322217 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c"} err="failed to get container status \"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c\": rpc error: code = NotFound desc = could not find container \"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c\": container with ID starting with c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c not found: ID does not exist" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.322262 5108 scope.go:117] "RemoveContainer" 
containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" Feb 02 00:21:13 crc kubenswrapper[5108]: E0202 00:21:13.322533 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0\": container with ID starting with 66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0 not found: ID does not exist" containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.322554 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0"} err="failed to get container status \"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0\": rpc error: code = NotFound desc = could not find container \"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0\": container with ID starting with 66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0 not found: ID does not exist" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.567833 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" path="/var/lib/kubelet/pods/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de/volumes" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.891940 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"] Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893183 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-utilities" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893268 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-utilities" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893311 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893328 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893376 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-content" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893394 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-content" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893616 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.900078 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.904093 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.906996 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"] Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.084920 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.084998 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.085058 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.186488 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.186548 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.186822 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.187062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.187098 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.209530 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.220614 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.472087 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"] Feb 02 00:21:17 crc kubenswrapper[5108]: I0202 00:21:17.267577 5108 generic.go:358] "Generic (PLEG): container finished" podID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerID="c0e77d5c881bd16da700dc8c585be4c30d3a4c7939538a230b08090258a9f793" exitCode=0 Feb 02 00:21:17 crc kubenswrapper[5108]: I0202 00:21:17.267684 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"c0e77d5c881bd16da700dc8c585be4c30d3a4c7939538a230b08090258a9f793"} Feb 02 00:21:17 crc kubenswrapper[5108]: I0202 00:21:17.268208 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerStarted","Data":"cc8d25dea57e7e52a5d788f8c0e53956ed52d0364567e99eae8fc75630fe7ca9"} Feb 02 00:21:19 crc kubenswrapper[5108]: I0202 00:21:19.284995 5108 generic.go:358] "Generic (PLEG): container finished" podID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerID="c2e94d842157cd23f78b3f813a79398dd69be41be0d83e88b3b4d9d1b59a07e8" exitCode=0 Feb 02 00:21:19 crc kubenswrapper[5108]: I0202 00:21:19.285127 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"c2e94d842157cd23f78b3f813a79398dd69be41be0d83e88b3b4d9d1b59a07e8"} Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.293348 5108 generic.go:358] "Generic (PLEG): container finished" podID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerID="9517f4486885d0e23ba040c8061ca727ac2c30500d7f28233a8136c672fbaa25" exitCode=0 Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 
00:21:20.293433 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"9517f4486885d0e23ba040c8061ca727ac2c30500d7f28233a8136c672fbaa25"} Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.919169 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.919322 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.550312 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.674178 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.674256 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.674307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.676744 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle" (OuterVolumeSpecName: "bundle") pod "3b577ebd-ea5b-4c70-b43d-826f4ea87884" (UID: "3b577ebd-ea5b-4c70-b43d-826f4ea87884"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.681025 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8" (OuterVolumeSpecName: "kube-api-access-lk6k8") pod "3b577ebd-ea5b-4c70-b43d-826f4ea87884" (UID: "3b577ebd-ea5b-4c70-b43d-826f4ea87884"). InnerVolumeSpecName "kube-api-access-lk6k8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.685983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util" (OuterVolumeSpecName: "util") pod "3b577ebd-ea5b-4c70-b43d-826f4ea87884" (UID: "3b577ebd-ea5b-4c70-b43d-826f4ea87884"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.775568 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.775601 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.775659 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.306916 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"cc8d25dea57e7e52a5d788f8c0e53956ed52d0364567e99eae8fc75630fe7ca9"} Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.306956 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8d25dea57e7e52a5d788f8c0e53956ed52d0364567e99eae8fc75630fe7ca9" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.306996 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.487555 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"] Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488599 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="util" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488635 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="util" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488670 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="extract" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488683 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="extract" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488707 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="pull" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488724 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="pull" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488923 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="extract" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.501215 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"] Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.501365 5108 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.506438 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.587071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.588463 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.589392 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.690589 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.690698 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.690740 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.691656 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.692782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.718749 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.818262 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:23 crc kubenswrapper[5108]: I0202 00:21:23.260770 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"] Feb 02 00:21:23 crc kubenswrapper[5108]: W0202 00:21:23.263569 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a27ac25_eac0_4877_a439_99fd1b7ea671.slice/crio-2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b WatchSource:0}: Error finding container 2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b: Status 404 returned error can't find the container with id 2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b Feb 02 00:21:23 crc kubenswrapper[5108]: I0202 00:21:23.315906 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerStarted","Data":"2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b"} Feb 02 00:21:24 crc kubenswrapper[5108]: I0202 00:21:24.332672 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerID="1317710cb20fa54818ef19c864a91d464a6c5b33a084db965d81c67d653503b1" exitCode=0 Feb 02 00:21:24 crc kubenswrapper[5108]: I0202 00:21:24.332747 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"1317710cb20fa54818ef19c864a91d464a6c5b33a084db965d81c67d653503b1"} Feb 02 00:21:25 crc kubenswrapper[5108]: I0202 00:21:25.341658 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerID="9916d02f84396cc9813a4fd83613bfb8d021c6ef22c14ded40cdbd8a6b033881" exitCode=0 Feb 02 00:21:25 crc kubenswrapper[5108]: I0202 00:21:25.341831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" 
event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"9916d02f84396cc9813a4fd83613bfb8d021c6ef22c14ded40cdbd8a6b033881"} Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.307540 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"] Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.313415 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.320907 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"] Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.335516 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.335581 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.335715 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.349854 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerID="1bff12ef695180ab2eeb1b7c1cbf67c00db6f4fcaa938091baae8f24ac5a2fa0" exitCode=0 Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.350024 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"1bff12ef695180ab2eeb1b7c1cbf67c00db6f4fcaa938091baae8f24ac5a2fa0"} Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437193 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9jr\" (UniqueName: 
\"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437312 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437904 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.438142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.462185 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.627833 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.067422 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"] Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.361878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerStarted","Data":"333342191adc16bebef36b3b962a53cf0d69d89e809bdebb05023d5962f489b9"} Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.361946 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerStarted","Data":"fb3a14f5d6a6333e1bb81a6cf5ce121a5e4fa213dad0722af9a09a718dd82c63"} Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.792423 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.859713 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"2a27ac25-eac0-4877-a439-99fd1b7ea671\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.859870 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"2a27ac25-eac0-4877-a439-99fd1b7ea671\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.859932 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"2a27ac25-eac0-4877-a439-99fd1b7ea671\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.866110 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle" (OuterVolumeSpecName: "bundle") pod "2a27ac25-eac0-4877-a439-99fd1b7ea671" (UID: "2a27ac25-eac0-4877-a439-99fd1b7ea671"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.882606 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms" (OuterVolumeSpecName: "kube-api-access-qtgms") pod "2a27ac25-eac0-4877-a439-99fd1b7ea671" (UID: "2a27ac25-eac0-4877-a439-99fd1b7ea671"). InnerVolumeSpecName "kube-api-access-qtgms". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.883715 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util" (OuterVolumeSpecName: "util") pod "2a27ac25-eac0-4877-a439-99fd1b7ea671" (UID: "2a27ac25-eac0-4877-a439-99fd1b7ea671"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.961912 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.961965 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.961976 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.374580 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.374624 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b"} Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.374684 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b" Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.376636 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerID="333342191adc16bebef36b3b962a53cf0d69d89e809bdebb05023d5962f489b9" exitCode=0 Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.376707 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"333342191adc16bebef36b3b962a53cf0d69d89e809bdebb05023d5962f489b9"} Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.784378 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786197 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="pull" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786216 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="pull" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786260 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="util" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786267 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="util" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786289 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="extract" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786296 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="extract" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786398 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="extract" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.822085 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.822287 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.826695 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.827695 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-dqcjz\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.828498 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.929361 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.938162 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.939467 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2hh\" (UniqueName: \"kubernetes.io/projected/3cae4b55-dd8b-41da-85fd-e3a48cd48a84-kube-api-access-wb2hh\") pod \"obo-prometheus-operator-9bc85b4bf-qx2r6\" (UID: \"3cae4b55-dd8b-41da-85fd-e3a48cd48a84\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.940348 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.942455 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.942762 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-vjcrz\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.944196 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.951082 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.958417 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.032714 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-tdjm6"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.037625 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.040787 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.041942 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2hh\" (UniqueName: \"kubernetes.io/projected/3cae4b55-dd8b-41da-85fd-e3a48cd48a84-kube-api-access-wb2hh\") pod \"obo-prometheus-operator-9bc85b4bf-qx2r6\" (UID: \"3cae4b55-dd8b-41da-85fd-e3a48cd48a84\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042007 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042096 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.044818 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-jclnh\"" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.070190 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2hh\" (UniqueName: \"kubernetes.io/projected/3cae4b55-dd8b-41da-85fd-e3a48cd48a84-kube-api-access-wb2hh\") pod \"obo-prometheus-operator-9bc85b4bf-qx2r6\" (UID: \"3cae4b55-dd8b-41da-85fd-e3a48cd48a84\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.077614 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-tdjm6"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143458 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143498 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143541 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-observability-operator-tls\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143571 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmtd\" (UniqueName: \"kubernetes.io/projected/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-kube-api-access-2cmtd\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143597 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143689 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.149188 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.150934 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc 
kubenswrapper[5108]: I0202 00:21:33.151482 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.151872 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.156897 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.241758 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-twmfp"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.244993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-observability-operator-tls\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.245211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmtd\" (UniqueName: \"kubernetes.io/projected/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-kube-api-access-2cmtd\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.249273 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.250171 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-observability-operator-tls\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.254638 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-dk6cv\"" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.265738 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.267038 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmtd\" (UniqueName: \"kubernetes.io/projected/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-kube-api-access-2cmtd\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.271189 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-twmfp"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.280157 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.347395 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxrx\" (UniqueName: \"kubernetes.io/projected/600911fd-7824-48ed-a826-60768dce689a-kube-api-access-jvxrx\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.347474 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/600911fd-7824-48ed-a826-60768dce689a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.368186 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.447021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerStarted","Data":"6aff36c0ed2c2bc19c286f270a763c69381116735c7a583fda4be9f55c1e84c3"} Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.448145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/600911fd-7824-48ed-a826-60768dce689a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.448219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxrx\" (UniqueName: \"kubernetes.io/projected/600911fd-7824-48ed-a826-60768dce689a-kube-api-access-jvxrx\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.449556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/600911fd-7824-48ed-a826-60768dce689a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.479686 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxrx\" (UniqueName: \"kubernetes.io/projected/600911fd-7824-48ed-a826-60768dce689a-kube-api-access-jvxrx\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.608367 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.776421 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.804337 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.923748 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-tdjm6"] Feb 02 00:21:33 crc kubenswrapper[5108]: W0202 00:21:33.946432 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b7e0bd1_72e0_4772_a2cf_8287051d3acd.slice/crio-5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107 WatchSource:0}: Error finding container 5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107: Status 404 returned error can't find the container with id 5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.016629 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld"] Feb 02 00:21:34 crc kubenswrapper[5108]: W0202 00:21:34.023011 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b30b62b_4640_4186_8cec_9a4bce652c54.slice/crio-c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94 WatchSource:0}: Error finding container c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94: Status 404 returned error can't find the container with id c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.029713 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-twmfp"] Feb 02 00:21:34 crc kubenswrapper[5108]: W0202 00:21:34.045111 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod600911fd_7824_48ed_a826_60768dce689a.slice/crio-a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39 WatchSource:0}: Error finding container a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39: Status 404 returned error can't find the container with id a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.464765 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" event={"ID":"ea610d63-cdca-43f6-ae36-1021a5cfb158","Type":"ContainerStarted","Data":"a5fc15f70e97a6fe834548387adc8d6465cf96c0a47f06841dbc0e0d2861da35"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.466855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" event={"ID":"3cae4b55-dd8b-41da-85fd-e3a48cd48a84","Type":"ContainerStarted","Data":"1559c49a4ea838d468316facffb55760f3175a55f128844461b7cfae7ed87357"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.467686 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" 
event={"ID":"6b7e0bd1-72e0-4772-a2cf-8287051d3acd","Type":"ContainerStarted","Data":"5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.470439 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerID="6aff36c0ed2c2bc19c286f270a763c69381116735c7a583fda4be9f55c1e84c3" exitCode=0 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.470633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"6aff36c0ed2c2bc19c286f270a763c69381116735c7a583fda4be9f55c1e84c3"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.474026 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" event={"ID":"7b30b62b-4640-4186-8cec-9a4bce652c54","Type":"ContainerStarted","Data":"c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.477108 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" event={"ID":"600911fd-7824-48ed-a826-60768dce689a","Type":"ContainerStarted","Data":"a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39"} Feb 02 00:21:35 crc kubenswrapper[5108]: I0202 00:21:35.491930 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerID="389b38f0f5835f63e2beb4147aa5d526ede7fa13341eee189f8e868c666c3262" exitCode=0 Feb 02 00:21:35 crc kubenswrapper[5108]: I0202 00:21:35.492077 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"389b38f0f5835f63e2beb4147aa5d526ede7fa13341eee189f8e868c666c3262"} Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.493154 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7b74cb5c57-cx5qg"] Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.499303 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.508644 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.508650 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.509030 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.511091 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-xvmlt\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.512601 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7b74cb5c57-cx5qg"] Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.626704 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q8j\" (UniqueName: \"kubernetes.io/projected/dbc6504f-e1af-4747-a2b1-3260272984f3-kube-api-access-b5q8j\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.626762 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-webhook-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.626859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-apiservice-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.729728 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5q8j\" (UniqueName: \"kubernetes.io/projected/dbc6504f-e1af-4747-a2b1-3260272984f3-kube-api-access-b5q8j\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.730362 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-webhook-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.730395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-apiservice-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" 
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.742041 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-apiservice-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.751431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5q8j\" (UniqueName: \"kubernetes.io/projected/dbc6504f-e1af-4747-a2b1-3260272984f3-kube-api-access-b5q8j\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.756168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-webhook-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.831723 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.892322 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.036961 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"7fedf68a-9fd7-4344-b2d4-7856f539c455\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") "
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.037067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"7fedf68a-9fd7-4344-b2d4-7856f539c455\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") "
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.037088 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"7fedf68a-9fd7-4344-b2d4-7856f539c455\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") "
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.038129 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle" (OuterVolumeSpecName: "bundle") pod "7fedf68a-9fd7-4344-b2d4-7856f539c455" (UID: "7fedf68a-9fd7-4344-b2d4-7856f539c455"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.049482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr" (OuterVolumeSpecName: "kube-api-access-ld9jr") pod "7fedf68a-9fd7-4344-b2d4-7856f539c455" (UID: "7fedf68a-9fd7-4344-b2d4-7856f539c455"). InnerVolumeSpecName "kube-api-access-ld9jr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.064568 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util" (OuterVolumeSpecName: "util") pod "7fedf68a-9fd7-4344-b2d4-7856f539c455" (UID: "7fedf68a-9fd7-4344-b2d4-7856f539c455"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.138301 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.138731 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.138740 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.352420 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7b74cb5c57-cx5qg"]
Feb 02 00:21:37 crc kubenswrapper[5108]: W0202 00:21:37.358502 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc6504f_e1af_4747_a2b1_3260272984f3.slice/crio-78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9 WatchSource:0}: Error finding container 78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9: Status 404 returned error can't find the container with id 78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.550276 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" event={"ID":"dbc6504f-e1af-4747-a2b1-3260272984f3","Type":"ContainerStarted","Data":"78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9"}
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.557193 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.571118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"fb3a14f5d6a6333e1bb81a6cf5ce121a5e4fa213dad0722af9a09a718dd82c63"}
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.571189 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb3a14f5d6a6333e1bb81a6cf5ce121a5e4fa213dad0722af9a09a718dd82c63"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.166419 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"]
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167586 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="pull"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167602 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="pull"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167623 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="extract"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167628 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="extract"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167652 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="util"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167658 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="util"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167748 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="extract"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.178168 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.180463 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.180735 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-7zlpp\""
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.182044 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.182825 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"]
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.278955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1820eeba-be2c-4340-843a-2caf82b3b450-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.279049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqds8\" (UniqueName: \"kubernetes.io/projected/1820eeba-be2c-4340-843a-2caf82b3b450-kube-api-access-wqds8\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.380286 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqds8\" (UniqueName: \"kubernetes.io/projected/1820eeba-be2c-4340-843a-2caf82b3b450-kube-api-access-wqds8\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.380402 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1820eeba-be2c-4340-843a-2caf82b3b450-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.380952 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1820eeba-be2c-4340-843a-2caf82b3b450-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.404661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqds8\" (UniqueName: \"kubernetes.io/projected/1820eeba-be2c-4340-843a-2caf82b3b450-kube-api-access-wqds8\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.497406 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.687159 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" event={"ID":"ea610d63-cdca-43f6-ae36-1021a5cfb158","Type":"ContainerStarted","Data":"efd0c5cb3d39595715958b29b2ffb4a011b6e94ae5f101156c8b5196922cf11d"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.693414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" event={"ID":"3cae4b55-dd8b-41da-85fd-e3a48cd48a84","Type":"ContainerStarted","Data":"c285341b5f1d6b8da0f004554563d75c71b92ab7d272e55d2c8fc110cb5a5117"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.695424 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" event={"ID":"dbc6504f-e1af-4747-a2b1-3260272984f3","Type":"ContainerStarted","Data":"0937438871d36d99ad44e8724196c5684f8a83c31e378b90c7ed3de2cf3afcfc"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.698352 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" event={"ID":"6b7e0bd1-72e0-4772-a2cf-8287051d3acd","Type":"ContainerStarted","Data":"2e7cd1e77f2c6747ffbe1253b03f9e102710a9f35bb7655694993b33c0de9294"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.699160 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-tdjm6"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.700745 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" event={"ID":"7b30b62b-4640-4186-8cec-9a4bce652c54","Type":"ContainerStarted","Data":"7b6593954e2ba51932bb7bf877bb36be526a0ef9a5d8ecd1da93dfdcc5cb0540"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.702052 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-tdjm6"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.715862 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" event={"ID":"600911fd-7824-48ed-a826-60768dce689a","Type":"ContainerStarted","Data":"f61ca2f8995d7c71c4a4094622ea8f95ff364d27c81203b4281d2bd9612d4a40"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.716083 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-twmfp"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.722277 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" podStartSLOduration=2.879958055 podStartE2EDuration="17.722254706s" podCreationTimestamp="2026-02-02 00:21:32 +0000 UTC" firstStartedPulling="2026-02-02 00:21:33.842879659 +0000 UTC m=+693.118376589" lastFinishedPulling="2026-02-02 00:21:48.68517631 +0000 UTC m=+707.960673240" observedRunningTime="2026-02-02 00:21:49.722128562 +0000
UTC m=+708.997625502" watchObservedRunningTime="2026-02-02 00:21:49.722254706 +0000 UTC m=+708.997751636" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.766330 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"] Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.770312 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" podStartSLOduration=2.914821663 podStartE2EDuration="17.770292556s" podCreationTimestamp="2026-02-02 00:21:32 +0000 UTC" firstStartedPulling="2026-02-02 00:21:33.817558874 +0000 UTC m=+693.093055804" lastFinishedPulling="2026-02-02 00:21:48.673029767 +0000 UTC m=+707.948526697" observedRunningTime="2026-02-02 00:21:49.753108144 +0000 UTC m=+709.028605104" watchObservedRunningTime="2026-02-02 00:21:49.770292556 +0000 UTC m=+709.045789506" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.819763 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" podStartSLOduration=2.512353843 podStartE2EDuration="13.819737034s" podCreationTimestamp="2026-02-02 00:21:36 +0000 UTC" firstStartedPulling="2026-02-02 00:21:37.365220334 +0000 UTC m=+696.640717264" lastFinishedPulling="2026-02-02 00:21:48.672603525 +0000 UTC m=+707.948100455" observedRunningTime="2026-02-02 00:21:49.799977311 +0000 UTC m=+709.075474251" watchObservedRunningTime="2026-02-02 00:21:49.819737034 +0000 UTC m=+709.095233974" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.833011 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" podStartSLOduration=3.188206047 podStartE2EDuration="17.832989419s" podCreationTimestamp="2026-02-02 00:21:32 +0000 UTC" firstStartedPulling="2026-02-02 00:21:34.02842921 +0000 UTC m=+693.303926140" lastFinishedPulling="2026-02-02 00:21:48.673212582 +0000 UTC m=+707.948709512" observedRunningTime="2026-02-02 00:21:49.82684826 +0000 UTC m=+709.102345200" watchObservedRunningTime="2026-02-02 00:21:49.832989419 +0000 UTC m=+709.108486349" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.902728 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" podStartSLOduration=2.116127264 podStartE2EDuration="16.902708135s" podCreationTimestamp="2026-02-02 00:21:33 +0000 UTC" firstStartedPulling="2026-02-02 00:21:33.955303179 +0000 UTC m=+693.230800109" lastFinishedPulling="2026-02-02 00:21:48.74188406 +0000 UTC m=+708.017380980" observedRunningTime="2026-02-02 00:21:49.864635959 +0000 UTC m=+709.140132909" watchObservedRunningTime="2026-02-02 00:21:49.902708135 +0000 UTC m=+709.178205075" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.906284 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" podStartSLOduration=2.28108867 podStartE2EDuration="16.906271093s" podCreationTimestamp="2026-02-02 00:21:33 +0000 UTC" firstStartedPulling="2026-02-02 00:21:34.048777199 +0000 UTC m=+693.324274129" lastFinishedPulling="2026-02-02 00:21:48.673959622 +0000 UTC m=+707.949456552" observedRunningTime="2026-02-02 00:21:49.89887155 +0000 UTC m=+709.174368520" watchObservedRunningTime="2026-02-02 00:21:49.906271093 +0000 UTC m=+709.181768033" Feb 02 00:21:50 crc 
kubenswrapper[5108]: I0202 00:21:50.724114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc" event={"ID":"1820eeba-be2c-4340-843a-2caf82b3b450","Type":"ContainerStarted","Data":"939fba30b22bf5d03ad8928c8d7d94cd666aeece31637800841885ac4dec14fe"} Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.919589 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.919673 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.919721 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.920390 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.920446 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31" gracePeriod=600 Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.741660 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31" exitCode=0 Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.743482 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31"} Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.743514 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d"} Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.743534 5108 scope.go:117] "RemoveContainer" containerID="0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.950575 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.958375 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.960586 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.960931 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.961139 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.961487 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.961566 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.965694 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.966138 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-442s6\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.969076 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.977003 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.015455 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034519 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034588 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034618 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/91781fe7-72ca-4748-8dcd-5d7d1c275472-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034662 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034684 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034708 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034728 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034788 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035767 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035809 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035832 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138904 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138929 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138948 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138992 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139040 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139079 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139096 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139129 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139144 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139160 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: 
\"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139190 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139206 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/91781fe7-72ca-4748-8dcd-5d7d1c275472-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.142620 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.143263 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.143560 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.143813 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.144077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.144762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.146971 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.147626 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.147703 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.151816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.152339 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/91781fe7-72ca-4748-8dcd-5d7d1c275472-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.154474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.163601 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.171495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.174355 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.285952 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.498744 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:21:54 crc kubenswrapper[5108]: W0202 00:21:54.512784 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91781fe7_72ca_4748_8dcd_5d7d1c275472.slice/crio-44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e WatchSource:0}: Error finding container 44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e: Status 404 returned error can't find the container with id 44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.769662 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc" event={"ID":"1820eeba-be2c-4340-843a-2caf82b3b450","Type":"ContainerStarted","Data":"21402d94ddc844fdfeb341a432b8360de71f168962675a441d21ff47ce0a322c"} Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.771831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerStarted","Data":"44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e"} Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.801461 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc" podStartSLOduration=1.242451817 podStartE2EDuration="5.801443925s" podCreationTimestamp="2026-02-02 00:21:49 +0000 UTC" firstStartedPulling="2026-02-02 00:21:49.766382718 +0000 UTC m=+709.041879648" lastFinishedPulling="2026-02-02 00:21:54.325374836 +0000 UTC m=+713.600871756" observedRunningTime="2026-02-02 00:21:54.79450999 +0000 UTC m=+714.070006920" watchObservedRunningTime="2026-02-02 00:21:54.801443925 +0000 UTC m=+714.076940855" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.665127 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-gwlkp"] Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.674037 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.673631 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-gwlkp"] Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.677086 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.677102 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.679733 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-jbvdl\"" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.841480 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.841613 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffm5f\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-kube-api-access-ffm5f\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.942912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.942971 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ffm5f\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-kube-api-access-ffm5f\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.964458 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffm5f\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-kube-api-access-ffm5f\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.972240 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.993315 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.133779 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.140981 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.145588 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.145837 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.146684 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.147627 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.248272 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"auto-csr-approver-29499862-nmjl8\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.349995 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"auto-csr-approver-29499862-nmjl8\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.375307 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"auto-csr-approver-29499862-nmjl8\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.443742 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-gwlkp"] Feb 02 00:22:00 crc kubenswrapper[5108]: W0202 00:22:00.453403 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c526e59_9f54_4c07_9df7_9c254286c8b2.slice/crio-46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26 WatchSource:0}: Error finding container 46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26: Status 404 returned error can't find the container with id 46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26 Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.471086 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.678399 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:22:00 crc kubenswrapper[5108]: W0202 00:22:00.693711 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode35e90a5_9be9_4d25_a87f_80c879fadbdb.slice/crio-51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70 WatchSource:0}: Error finding container 51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70: Status 404 returned error can't find the container with id 51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70 Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.728673 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.823785 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" event={"ID":"9c526e59-9f54-4c07-9df7-9c254286c8b2","Type":"ContainerStarted","Data":"46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26"} Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.835433 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerStarted","Data":"51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70"} Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.838282 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-md5xl"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.842251 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.846143 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-ttpr4\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.856643 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-md5xl"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.962565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.962625 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qxf\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-kube-api-access-77qxf\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.063936 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.064027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77qxf\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-kube-api-access-77qxf\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.084978 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qxf\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-kube-api-access-77qxf\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.087959 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.165167 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.504731 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-md5xl"] Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.873735 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" event={"ID":"36067e0f-9235-409f-83d9-125165d03451","Type":"ContainerStarted","Data":"a4801c99b691d85a15f2704d4ff55b4833a3bad762b25c878f2c36ff5005a2c5"} Feb 02 00:22:02 crc kubenswrapper[5108]: I0202 00:22:02.883939 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerStarted","Data":"ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975"} Feb 02 00:22:02 crc kubenswrapper[5108]: I0202 00:22:02.961706 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" podStartSLOduration=1.9612024319999999 podStartE2EDuration="2.961686276s" podCreationTimestamp="2026-02-02 00:22:00 +0000 UTC" firstStartedPulling="2026-02-02 00:22:00.696690064 +0000 UTC m=+719.972186994" lastFinishedPulling="2026-02-02 00:22:01.697173908 +0000 UTC m=+720.972670838" observedRunningTime="2026-02-02 00:22:02.957328564 +0000 UTC m=+722.232825494" watchObservedRunningTime="2026-02-02 00:22:02.961686276 +0000 UTC m=+722.237183206" Feb 02 00:22:03 crc kubenswrapper[5108]: I0202 00:22:03.899498 5108 generic.go:358] "Generic (PLEG): container finished" podID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerID="ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975" exitCode=0 Feb 02 00:22:03 crc kubenswrapper[5108]: I0202 00:22:03.899697 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerDied","Data":"ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975"} Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.185232 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.279728 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.287059 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg" (OuterVolumeSpecName: "kube-api-access-qzhcg") pod "e35e90a5-9be9-4d25-a87f-80c879fadbdb" (UID: "e35e90a5-9be9-4d25-a87f-80c879fadbdb"). InnerVolumeSpecName "kube-api-access-qzhcg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.388299 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.925375 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerDied","Data":"51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70"} Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.925436 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.925463 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:06 crc kubenswrapper[5108]: I0202 00:22:06.232902 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:22:06 crc kubenswrapper[5108]: I0202 00:22:06.236782 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:22:07 crc kubenswrapper[5108]: I0202 00:22:07.564433 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d68061-8bea-4670-828e-3fd982547198" path="/var/lib/kubelet/pods/b2d68061-8bea-4670-828e-3fd982547198/volumes" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.380870 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.382317 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerName="oc" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.382331 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerName="oc" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.382449 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerName="oc" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.430913 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.431068 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.433599 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.562900 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.563414 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zqj\" (UniqueName: \"kubernetes.io/projected/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-kube-api-access-x9zqj\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.563503 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.664610 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.664704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zqj\" (UniqueName: \"kubernetes.io/projected/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-kube-api-access-x9zqj\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.664790 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " 
pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.665900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.666655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.700130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zqj\" (UniqueName: \"kubernetes.io/projected/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-kube-api-access-x9zqj\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.752821 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.882366 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-z8j4s"] Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.888347 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.893124 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-md8ws\"" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.902946 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-z8j4s"] Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.998205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-bound-sa-token\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.998269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrg6\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-kube-api-access-ljrg6\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.099293 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-bound-sa-token\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.099347 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljrg6\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-kube-api-access-ljrg6\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.119604 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrg6\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-kube-api-access-ljrg6\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.120079 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-bound-sa-token\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.210057 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:18 crc kubenswrapper[5108]: I0202 00:22:18.211392 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 02 00:22:18 crc kubenswrapper[5108]: W0202 00:22:18.224483 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d5efa3_18a2_405c_96ec_e5ee2d3014b2.slice/crio-681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205 WatchSource:0}: Error finding container 681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205: Status 404 returned error can't find the container with id 681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205 Feb 02 00:22:18 crc kubenswrapper[5108]: W0202 00:22:18.279921 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0e17311_6020_462f_9ab7_8db9a5b4fd53.slice/crio-c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875 WatchSource:0}: Error finding container c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875: Status 404 returned error can't find the container with id c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875 Feb 02 00:22:18 crc kubenswrapper[5108]: I0202 00:22:18.282787 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-z8j4s"] Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.023431 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"13d5efa3-18a2-405c-96ec-e5ee2d3014b2","Type":"ContainerStarted","Data":"681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.025851 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" event={"ID":"36067e0f-9235-409f-83d9-125165d03451","Type":"ContainerStarted","Data":"910fe1e9d1f303676d781b1bae1205ed9252606668b1b865b8fa1f886424c0d6"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.026014 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.028651 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerStarted","Data":"7ecd5f58b5fe2f871e7b269b373e5e2fc280e928be4497b883044d2c36a03ab4"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.029849 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" event={"ID":"9c526e59-9f54-4c07-9df7-9c254286c8b2","Type":"ContainerStarted","Data":"01bf8af26ce1df79714b9bcae9bdc6cf8187e634cc20810db6409c6eca49c881"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.031454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-z8j4s" event={"ID":"f0e17311-6020-462f-9ab7-8db9a5b4fd53","Type":"ContainerStarted","Data":"efce5e45a33c822592fb6de999f000bfb240d91475033f7cf55d84ecabbbd810"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.031480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-z8j4s" 
event={"ID":"f0e17311-6020-462f-9ab7-8db9a5b4fd53","Type":"ContainerStarted","Data":"c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.068992 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" podStartSLOduration=2.728748533 podStartE2EDuration="19.068976216s" podCreationTimestamp="2026-02-02 00:22:00 +0000 UTC" firstStartedPulling="2026-02-02 00:22:01.687707684 +0000 UTC m=+720.963204614" lastFinishedPulling="2026-02-02 00:22:18.027935347 +0000 UTC m=+737.303432297" observedRunningTime="2026-02-02 00:22:19.065844698 +0000 UTC m=+738.341341638" watchObservedRunningTime="2026-02-02 00:22:19.068976216 +0000 UTC m=+738.344473146" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.093012 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-z8j4s" podStartSLOduration=3.092989589 podStartE2EDuration="3.092989589s" podCreationTimestamp="2026-02-02 00:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:22:19.087745912 +0000 UTC m=+738.363242842" watchObservedRunningTime="2026-02-02 00:22:19.092989589 +0000 UTC m=+738.368486529" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.177494 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" podStartSLOduration=2.640117852 podStartE2EDuration="20.177474374s" podCreationTimestamp="2026-02-02 00:21:59 +0000 UTC" firstStartedPulling="2026-02-02 00:22:00.460838861 +0000 UTC m=+719.736335791" lastFinishedPulling="2026-02-02 00:22:17.998195363 +0000 UTC m=+737.273692313" observedRunningTime="2026-02-02 00:22:19.121937199 +0000 UTC m=+738.397434129" watchObservedRunningTime="2026-02-02 00:22:19.177474374 +0000 UTC m=+738.452971304" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.317789 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.346139 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:22:21 crc kubenswrapper[5108]: I0202 00:22:21.049791 5108 generic.go:358] "Generic (PLEG): container finished" podID="91781fe7-72ca-4748-8dcd-5d7d1c275472" containerID="7ecd5f58b5fe2f871e7b269b373e5e2fc280e928be4497b883044d2c36a03ab4" exitCode=0 Feb 02 00:22:21 crc kubenswrapper[5108]: I0202 00:22:21.049845 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerDied","Data":"7ecd5f58b5fe2f871e7b269b373e5e2fc280e928be4497b883044d2c36a03ab4"} Feb 02 00:22:24 crc kubenswrapper[5108]: I0202 00:22:24.074015 5108 generic.go:358] "Generic (PLEG): container finished" podID="13d5efa3-18a2-405c-96ec-e5ee2d3014b2" containerID="6ccc56e44008c8bdc70fabbd8ac843e8bb5c8f578b2f88a9b867948e4db96b0c" exitCode=0 Feb 02 00:22:24 crc kubenswrapper[5108]: I0202 00:22:24.074113 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"13d5efa3-18a2-405c-96ec-e5ee2d3014b2","Type":"ContainerDied","Data":"6ccc56e44008c8bdc70fabbd8ac843e8bb5c8f578b2f88a9b867948e4db96b0c"} Feb 02 00:22:24 crc 
Feb 02 00:22:24 crc kubenswrapper[5108]: I0202 00:22:24.077710 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerDied","Data":"6300a2dc28e3c4f04ee436a881ab1be37cfdad5111656a4146dacc4c870adee4"}
Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.045676 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl"
Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.093808 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerStarted","Data":"d2e711aff88f7d44f468273aa8bf1d2828eb4f109c32cda98d1d9f783d366c4c"}
Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.094257 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.149450 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.53675842 podStartE2EDuration="33.149426722s" podCreationTimestamp="2026-02-02 00:21:52 +0000 UTC" firstStartedPulling="2026-02-02 00:21:54.515292646 +0000 UTC m=+713.790789576" lastFinishedPulling="2026-02-02 00:22:18.127960948 +0000 UTC m=+737.403457878" observedRunningTime="2026-02-02 00:22:25.144253627 +0000 UTC m=+744.419750577" watchObservedRunningTime="2026-02-02 00:22:25.149426722 +0000 UTC m=+744.424923652"
Feb 02 00:22:28 crc kubenswrapper[5108]: I0202 00:22:28.128728 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"13d5efa3-18a2-405c-96ec-e5ee2d3014b2","Type":"ContainerStarted","Data":"3a950f31bbc63a2355d530a73743f3bf4b083eb86e5f832f78d48525d315daa7"}
Feb 02 00:22:28 crc kubenswrapper[5108]: I0202 00:22:28.158480 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=4.853456082 podStartE2EDuration="14.158449126s" podCreationTimestamp="2026-02-02 00:22:14 +0000 UTC" firstStartedPulling="2026-02-02 00:22:18.226863697 +0000 UTC m=+737.502360627" lastFinishedPulling="2026-02-02 00:22:27.531856741 +0000 UTC m=+746.807353671" observedRunningTime="2026-02-02 00:22:28.152673235 +0000 UTC m=+747.428170205" watchObservedRunningTime="2026-02-02 00:22:28.158449126 +0000 UTC m=+747.433946056"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.060796 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"]
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.069041 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.083215 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"]
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.200446 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.200594 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.200670 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.301969 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302081 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302161 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302516 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302715 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.345354 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.428529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.889196 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"] Feb 02 00:22:30 crc kubenswrapper[5108]: W0202 00:22:30.897463 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf02ca82_ac58_4944_8da6_d006cf605640.slice/crio-a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42 WatchSource:0}: Error finding container a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42: Status 404 returned error can't find the container with id a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42 Feb 02 00:22:31 crc kubenswrapper[5108]: I0202 00:22:31.157271 5108 generic.go:358] "Generic (PLEG): container finished" podID="af02ca82-ac58-4944-8da6-d006cf605640" containerID="acb9b8d3b29f8fd43d52f5da5189aa8849e58cc27d2bfa608f35d89115d8f06d" exitCode=0 Feb 02 00:22:31 crc kubenswrapper[5108]: I0202 00:22:31.157396 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"acb9b8d3b29f8fd43d52f5da5189aa8849e58cc27d2bfa608f35d89115d8f06d"} Feb 02 00:22:31 crc kubenswrapper[5108]: I0202 00:22:31.157857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerStarted","Data":"a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42"} Feb 02 00:22:32 crc kubenswrapper[5108]: I0202 00:22:32.171489 5108 generic.go:358] "Generic (PLEG): container finished" podID="af02ca82-ac58-4944-8da6-d006cf605640" containerID="a466db46a9799efec35e7ce18b379a2896b9217339a99b02488b81f0e5c8affe" exitCode=0 Feb 02 00:22:32 crc kubenswrapper[5108]: I0202 00:22:32.171679 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" 
event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"a466db46a9799efec35e7ce18b379a2896b9217339a99b02488b81f0e5c8affe"} Feb 02 00:22:33 crc kubenswrapper[5108]: I0202 00:22:33.185281 5108 generic.go:358] "Generic (PLEG): container finished" podID="af02ca82-ac58-4944-8da6-d006cf605640" containerID="2a5f602da0b8b8e3ac79a7a7ed93ea2a25f5241caad2fd9c08e65a6bf55bcfb8" exitCode=0 Feb 02 00:22:33 crc kubenswrapper[5108]: I0202 00:22:33.185361 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"2a5f602da0b8b8e3ac79a7a7ed93ea2a25f5241caad2fd9c08e65a6bf55bcfb8"} Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.450734 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.573381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"af02ca82-ac58-4944-8da6-d006cf605640\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.573732 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"af02ca82-ac58-4944-8da6-d006cf605640\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.575056 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"af02ca82-ac58-4944-8da6-d006cf605640\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.576347 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle" (OuterVolumeSpecName: "bundle") pod "af02ca82-ac58-4944-8da6-d006cf605640" (UID: "af02ca82-ac58-4944-8da6-d006cf605640"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.577432 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.588786 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util" (OuterVolumeSpecName: "util") pod "af02ca82-ac58-4944-8da6-d006cf605640" (UID: "af02ca82-ac58-4944-8da6-d006cf605640"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.599071 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6" (OuterVolumeSpecName: "kube-api-access-wrpm6") pod "af02ca82-ac58-4944-8da6-d006cf605640" (UID: "af02ca82-ac58-4944-8da6-d006cf605640"). InnerVolumeSpecName "kube-api-access-wrpm6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.679412 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.679464 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:35 crc kubenswrapper[5108]: I0202 00:22:35.204133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42"} Feb 02 00:22:35 crc kubenswrapper[5108]: I0202 00:22:35.205032 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42" Feb 02 00:22:35 crc kubenswrapper[5108]: I0202 00:22:35.204445 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:36 crc kubenswrapper[5108]: I0202 00:22:36.252402 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="91781fe7-72ca-4748-8dcd-5d7d1c275472" containerName="elasticsearch" probeResult="failure" output=< Feb 02 00:22:36 crc kubenswrapper[5108]: {"timestamp": "2026-02-02T00:22:36+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 02 00:22:36 crc kubenswrapper[5108]: > Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.689145 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-5f7rf"] Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.689986 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="extract" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690000 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="extract" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690023 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="util" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690028 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="util" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690038 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="pull" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690043 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="pull" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690143 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="extract" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.693673 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.696207 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-bzxlm\"" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.711095 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-5f7rf"] Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.847791 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/02251320-d565-4211-98ff-a138f7924888-kube-api-access-fjqm2\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.847870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/02251320-d565-4211-98ff-a138f7924888-runner\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.949794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/02251320-d565-4211-98ff-a138f7924888-kube-api-access-fjqm2\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.949851 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/02251320-d565-4211-98ff-a138f7924888-runner\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.950371 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/02251320-d565-4211-98ff-a138f7924888-runner\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.977611 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/02251320-d565-4211-98ff-a138f7924888-kube-api-access-fjqm2\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:39 crc kubenswrapper[5108]: I0202 00:22:39.011270 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:39 crc kubenswrapper[5108]: I0202 00:22:39.217021 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-5f7rf"] Feb 02 00:22:39 crc kubenswrapper[5108]: I0202 00:22:39.234857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" event={"ID":"02251320-d565-4211-98ff-a138f7924888","Type":"ContainerStarted","Data":"011902d18cacf584871509d282aa2108a1bc7261b97dcbef1079572f992ec1a7"} Feb 02 00:22:41 crc kubenswrapper[5108]: I0202 00:22:41.673732 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:22:59 crc kubenswrapper[5108]: I0202 00:22:59.441432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" event={"ID":"02251320-d565-4211-98ff-a138f7924888","Type":"ContainerStarted","Data":"63a673139938b61ed4a645e70a823e744314ec80ba8934594da501563d78a1b7"} Feb 02 00:22:59 crc kubenswrapper[5108]: I0202 00:22:59.475197 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" podStartSLOduration=2.043332174 podStartE2EDuration="21.475171275s" podCreationTimestamp="2026-02-02 00:22:38 +0000 UTC" firstStartedPulling="2026-02-02 00:22:39.224351498 +0000 UTC m=+758.499848418" lastFinishedPulling="2026-02-02 00:22:58.656190589 +0000 UTC m=+777.931687519" observedRunningTime="2026-02-02 00:22:59.464926515 +0000 UTC m=+778.740423485" watchObservedRunningTime="2026-02-02 00:22:59.475171275 +0000 UTC m=+778.750668245" Feb 02 00:23:02 crc kubenswrapper[5108]: I0202 00:23:02.093151 5108 scope.go:117] "RemoveContainer" containerID="b0d175fd10d4619cf043b11fd6ec6f1927ee4a1ffad44abf1e805ecf0fef43df" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.649274 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.655154 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.657878 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.660203 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.720770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.720829 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.720962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrgn\" (UniqueName: \"kubernetes.io/projected/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-kube-api-access-gbrgn\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.822464 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbrgn\" (UniqueName: \"kubernetes.io/projected/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-kube-api-access-gbrgn\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.822544 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.822572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: 
\"kubernetes.io/empty-dir/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.823120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.823453 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.846963 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbrgn\" (UniqueName: \"kubernetes.io/projected/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-kube-api-access-gbrgn\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.974747 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:17 crc kubenswrapper[5108]: I0202 00:23:17.423188 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 02 00:23:17 crc kubenswrapper[5108]: W0202 00:23:17.426934 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod776f0747_5ab3_4ca4_9437_caf3e9c10f6f.slice/crio-f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c WatchSource:0}: Error finding container f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c: Status 404 returned error can't find the container with id f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c Feb 02 00:23:17 crc kubenswrapper[5108]: I0202 00:23:17.564302 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"776f0747-5ab3-4ca4-9437-caf3e9c10f6f","Type":"ContainerStarted","Data":"f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c"} Feb 02 00:23:18 crc kubenswrapper[5108]: I0202 00:23:18.573331 5108 generic.go:358] "Generic (PLEG): container finished" podID="776f0747-5ab3-4ca4-9437-caf3e9c10f6f" containerID="c7fbe5a6b7bb919b31b754e9af1147639d57ec3eb42ef023dd94a95b29b16577" exitCode=0 Feb 02 00:23:18 crc kubenswrapper[5108]: I0202 00:23:18.573448 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"776f0747-5ab3-4ca4-9437-caf3e9c10f6f","Type":"ContainerDied","Data":"c7fbe5a6b7bb919b31b754e9af1147639d57ec3eb42ef023dd94a95b29b16577"} Feb 02 00:23:20 crc kubenswrapper[5108]: I0202 00:23:20.587435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"776f0747-5ab3-4ca4-9437-caf3e9c10f6f","Type":"ContainerStarted","Data":"f2120de6853b36bc7f4be377d57c0c2c549989781901e60e60b1fb9ea44b829b"} Feb 02 00:23:20 crc kubenswrapper[5108]: I0202 00:23:20.606419 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=3.015730394 podStartE2EDuration="4.60639798s" podCreationTimestamp="2026-02-02 00:23:16 +0000 UTC" firstStartedPulling="2026-02-02 00:23:18.57447153 +0000 UTC m=+797.849968460" lastFinishedPulling="2026-02-02 00:23:20.165139086 +0000 UTC m=+799.440636046" observedRunningTime="2026-02-02 00:23:20.603849686 +0000 UTC m=+799.879346626" watchObservedRunningTime="2026-02-02 00:23:20.60639798 +0000 UTC m=+799.881894910" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.270271 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9"] Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.278399 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.280925 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9"] Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.283740 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.427829 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.428196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.428615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.529563 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.529970 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.530112 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.530752 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.531445 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.550218 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.609434 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.028107 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9"] Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.047642 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt"] Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.117311 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt"] Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.117441 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.240582 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.240794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.240907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.342528 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.342626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.342655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.343462 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.344311 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.371950 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.435624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.617985 5108 generic.go:358] "Generic (PLEG): container finished" podID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerID="708c0c30cb2dbe9d2b8f4e0cd80d2d367038e08c79f36cdc11388a5b843dd106" exitCode=0 Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.618460 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"708c0c30cb2dbe9d2b8f4e0cd80d2d367038e08c79f36cdc11388a5b843dd106"} Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.618506 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerStarted","Data":"2385e8aeff0016640a9fc886b1e2186ae6b1902e8fbc72c5da6b73b443156b01"} Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.641354 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt"] Feb 02 00:23:25 crc kubenswrapper[5108]: I0202 00:23:25.642018 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerStarted","Data":"f0393a7f43db4500070cc032f904361e4e7af3460a98b1a300595e380d5b31c7"} Feb 02 00:23:25 crc kubenswrapper[5108]: I0202 00:23:25.642381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerStarted","Data":"68e90b91198c229e0af2107143436301ec4b686e48ec5d15fb81ef4ed2103fbe"} Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 00:23:26.650713 5108 generic.go:358] "Generic (PLEG): container finished" podID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerID="7bcfad5d1f49488310c1f60a8b396e30f0d8ccd3d43caab6215d2fbdcbc9ee34" exitCode=0 Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 00:23:26.650898 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"7bcfad5d1f49488310c1f60a8b396e30f0d8ccd3d43caab6215d2fbdcbc9ee34"} Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 
Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 00:23:26.653460 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"f0393a7f43db4500070cc032f904361e4e7af3460a98b1a300595e380d5b31c7"}
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.004098 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"]
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.008800 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.021337 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"]
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.180913 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.180989 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.181034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.282780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.282837 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.282871 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.283427 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.283530 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.305925 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.329686 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn"
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.548399 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"]
Feb 02 00:23:27 crc kubenswrapper[5108]: W0202 00:23:27.626306 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3421ef38_8f4b_4f32_9305_3aa037a2f474.slice/crio-cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e WatchSource:0}: Error finding container cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e: Status 404 returned error can't find the container with id cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.673540 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerStarted","Data":"cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e"}
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.677619 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerID="d93d2cacedcb9b871190b5561ecd48cbb9031d9229506a12beb76097c34e221f" exitCode=0
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.677859 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"d93d2cacedcb9b871190b5561ecd48cbb9031d9229506a12beb76097c34e221f"}
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.693780 5108 generic.go:358] "Generic (PLEG): container finished" podID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerID="350095329effad337d6dbbcaa6e9126971ccc0224cbb1c43dcc6d9550d2960a7" exitCode=0
Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.694035 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"350095329effad337d6dbbcaa6e9126971ccc0224cbb1c43dcc6d9550d2960a7"}
Feb 02
00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.703770 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerID="882c9c001cce1aef356d3d8973567eeaf86bf006c81ad45d70ef7856832e09cb" exitCode=0 Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.703810 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"882c9c001cce1aef356d3d8973567eeaf86bf006c81ad45d70ef7856832e09cb"} Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.705303 5108 generic.go:358] "Generic (PLEG): container finished" podID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerID="cd91e900875a1d2348a55c6e5c86785cf399e66b88e106b4dd590563e0ece655" exitCode=0 Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.705631 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"cd91e900875a1d2348a55c6e5c86785cf399e66b88e106b4dd590563e0ece655"} Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.956171 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.107805 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"09f8289b-76c1-4e9d-9878-88f41e0289df\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.107934 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"09f8289b-76c1-4e9d-9878-88f41e0289df\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.107976 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"09f8289b-76c1-4e9d-9878-88f41e0289df\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.109110 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle" (OuterVolumeSpecName: "bundle") pod "09f8289b-76c1-4e9d-9878-88f41e0289df" (UID: "09f8289b-76c1-4e9d-9878-88f41e0289df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.117790 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56" (OuterVolumeSpecName: "kube-api-access-brm56") pod "09f8289b-76c1-4e9d-9878-88f41e0289df" (UID: "09f8289b-76c1-4e9d-9878-88f41e0289df"). InnerVolumeSpecName "kube-api-access-brm56". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.120392 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util" (OuterVolumeSpecName: "util") pod "09f8289b-76c1-4e9d-9878-88f41e0289df" (UID: "09f8289b-76c1-4e9d-9878-88f41e0289df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.210006 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.210057 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.210075 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.716208 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerStarted","Data":"6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394"} Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.719677 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.720176 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"2385e8aeff0016640a9fc886b1e2186ae6b1902e8fbc72c5da6b73b443156b01"} Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.720208 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2385e8aeff0016640a9fc886b1e2186ae6b1902e8fbc72c5da6b73b443156b01" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.926912 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.020976 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"0b9c2624-6584-48ce-9b40-5f866de6d896\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.021102 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"0b9c2624-6584-48ce-9b40-5f866de6d896\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.021123 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"0b9c2624-6584-48ce-9b40-5f866de6d896\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.021902 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle" (OuterVolumeSpecName: "bundle") pod "0b9c2624-6584-48ce-9b40-5f866de6d896" (UID: "0b9c2624-6584-48ce-9b40-5f866de6d896"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.034591 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util" (OuterVolumeSpecName: "util") pod "0b9c2624-6584-48ce-9b40-5f866de6d896" (UID: "0b9c2624-6584-48ce-9b40-5f866de6d896"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.040208 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr" (OuterVolumeSpecName: "kube-api-access-rhqhr") pod "0b9c2624-6584-48ce-9b40-5f866de6d896" (UID: "0b9c2624-6584-48ce-9b40-5f866de6d896"). InnerVolumeSpecName "kube-api-access-rhqhr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.122728 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.122756 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.122765 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.729123 5108 generic.go:358] "Generic (PLEG): container finished" podID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerID="6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394" exitCode=0 Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.729211 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394"} Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.737125 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"68e90b91198c229e0af2107143436301ec4b686e48ec5d15fb81ef4ed2103fbe"} Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.737167 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e90b91198c229e0af2107143436301ec4b686e48ec5d15fb81ef4ed2103fbe" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.737277 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:31 crc kubenswrapper[5108]: I0202 00:23:31.744640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerStarted","Data":"7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398"} Feb 02 00:23:31 crc kubenswrapper[5108]: I0202 00:23:31.765204 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmpmn" podStartSLOduration=5.113270032 podStartE2EDuration="5.76518662s" podCreationTimestamp="2026-02-02 00:23:26 +0000 UTC" firstStartedPulling="2026-02-02 00:23:28.706142232 +0000 UTC m=+807.981639152" lastFinishedPulling="2026-02-02 00:23:29.35805881 +0000 UTC m=+808.633555740" observedRunningTime="2026-02-02 00:23:31.76283532 +0000 UTC m=+811.038332270" watchObservedRunningTime="2026-02-02 00:23:31.76518662 +0000 UTC m=+811.040683550" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.330904 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.331302 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.393165 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.845105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.952670 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-7r9xw"] Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953525 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953547 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953564 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953571 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953591 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953598 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953609 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953614 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" 
containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953626 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953632 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953647 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953653 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953794 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953817 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.963247 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.965682 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-p4gtg\"" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.968465 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-7r9xw"] Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.982630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdlj9\" (UniqueName: \"kubernetes.io/projected/3ea9b720-173a-450f-8359-555796dc329f-kube-api-access-rdlj9\") pod \"interconnect-operator-78b9bd8798-7r9xw\" (UID: \"3ea9b720-173a-450f-8359-555796dc329f\") " pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.084110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdlj9\" (UniqueName: \"kubernetes.io/projected/3ea9b720-173a-450f-8359-555796dc329f-kube-api-access-rdlj9\") pod \"interconnect-operator-78b9bd8798-7r9xw\" (UID: \"3ea9b720-173a-450f-8359-555796dc329f\") " pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.113215 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdlj9\" (UniqueName: \"kubernetes.io/projected/3ea9b720-173a-450f-8359-555796dc329f-kube-api-access-rdlj9\") pod \"interconnect-operator-78b9bd8798-7r9xw\" (UID: \"3ea9b720-173a-450f-8359-555796dc329f\") " pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.276901 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.514074 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-7r9xw"] Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.798288 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" event={"ID":"3ea9b720-173a-450f-8359-555796dc329f","Type":"ContainerStarted","Data":"8eddc6b7ff54eb81c4e93a6993467b1be7c9ba29f7194b10d71d6125d75d691d"} Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.281680 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-6gtwj"] Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.527709 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-6gtwj"] Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.527845 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.530104 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-fkjnl\"" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.601868 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.602154 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmpmn" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="registry-server" containerID="cri-o://7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398" gracePeriod=2 Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.627725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1c4a2dde-667e-45e3-8d53-9219bcfd2214-kube-api-access-82z6r\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.627855 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/1c4a2dde-667e-45e3-8d53-9219bcfd2214-runner\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.729465 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/1c4a2dde-667e-45e3-8d53-9219bcfd2214-runner\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.729906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1c4a2dde-667e-45e3-8d53-9219bcfd2214-kube-api-access-82z6r\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" 
(UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.730007 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/1c4a2dde-667e-45e3-8d53-9219bcfd2214-runner\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.760818 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1c4a2dde-667e-45e3-8d53-9219bcfd2214-kube-api-access-82z6r\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.823654 5108 generic.go:358] "Generic (PLEG): container finished" podID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerID="7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398" exitCode=0 Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.824078 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398"} Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.852187 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.970114 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.034970 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"3421ef38-8f4b-4f32-9305-3aa037a2f474\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.035284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"3421ef38-8f4b-4f32-9305-3aa037a2f474\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.035477 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"3421ef38-8f4b-4f32-9305-3aa037a2f474\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.036558 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities" (OuterVolumeSpecName: "utilities") pod "3421ef38-8f4b-4f32-9305-3aa037a2f474" (UID: "3421ef38-8f4b-4f32-9305-3aa037a2f474"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.041693 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn" (OuterVolumeSpecName: "kube-api-access-6dtjn") pod "3421ef38-8f4b-4f32-9305-3aa037a2f474" (UID: "3421ef38-8f4b-4f32-9305-3aa037a2f474"). InnerVolumeSpecName "kube-api-access-6dtjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.118175 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-6gtwj"] Feb 02 00:23:41 crc kubenswrapper[5108]: W0202 00:23:41.130243 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c4a2dde_667e_45e3_8d53_9219bcfd2214.slice/crio-7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd WatchSource:0}: Error finding container 7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd: Status 404 returned error can't find the container with id 7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.143269 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.143298 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.156247 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3421ef38-8f4b-4f32-9305-3aa037a2f474" (UID: "3421ef38-8f4b-4f32-9305-3aa037a2f474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.244992 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.835515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" event={"ID":"1c4a2dde-667e-45e3-8d53-9219bcfd2214","Type":"ContainerStarted","Data":"7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd"} Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.843507 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e"} Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.843572 5108 scope.go:117] "RemoveContainer" containerID="7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.843765 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.874514 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.882481 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:43 crc kubenswrapper[5108]: I0202 00:23:43.570111 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" path="/var/lib/kubelet/pods/3421ef38-8f4b-4f32-9305-3aa037a2f474/volumes" Feb 02 00:23:45 crc kubenswrapper[5108]: I0202 00:23:45.665097 5108 scope.go:117] "RemoveContainer" containerID="6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394" Feb 02 00:23:45 crc kubenswrapper[5108]: I0202 00:23:45.724545 5108 scope.go:117] "RemoveContainer" containerID="cd91e900875a1d2348a55c6e5c86785cf399e66b88e106b4dd590563e0ece655" Feb 02 00:23:46 crc kubenswrapper[5108]: I0202 00:23:46.929682 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" event={"ID":"3ea9b720-173a-450f-8359-555796dc329f","Type":"ContainerStarted","Data":"2cc4e47a14e9d721551cf58d75a2e19d77b1eea60175f8eb66445f0ecc31f982"} Feb 02 00:23:46 crc kubenswrapper[5108]: I0202 00:23:46.955221 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" podStartSLOduration=2.697221897 podStartE2EDuration="9.955203375s" podCreationTimestamp="2026-02-02 00:23:37 +0000 UTC" firstStartedPulling="2026-02-02 00:23:38.516718208 +0000 UTC m=+817.792215138" lastFinishedPulling="2026-02-02 00:23:45.774699676 +0000 UTC m=+825.050196616" observedRunningTime="2026-02-02 00:23:46.952753463 +0000 UTC m=+826.228250473" watchObservedRunningTime="2026-02-02 00:23:46.955203375 +0000 UTC m=+826.230700305" Feb 02 00:23:53 crc kubenswrapper[5108]: I0202 00:23:53.003892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" event={"ID":"1c4a2dde-667e-45e3-8d53-9219bcfd2214","Type":"ContainerStarted","Data":"1bf324fbd7d3f1961d09f7bce6af69dc46d35ed66152ea01ff2b756d0862b6e9"} Feb 02 00:23:53 crc kubenswrapper[5108]: I0202 00:23:53.031872 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" podStartSLOduration=1.5056129870000001 podStartE2EDuration="13.031846651s" podCreationTimestamp="2026-02-02 00:23:40 +0000 UTC" firstStartedPulling="2026-02-02 00:23:41.134113592 +0000 UTC m=+820.409610522" lastFinishedPulling="2026-02-02 00:23:52.660347256 +0000 UTC m=+831.935844186" observedRunningTime="2026-02-02 00:23:53.023666703 +0000 UTC m=+832.299163623" watchObservedRunningTime="2026-02-02 00:23:53.031846651 +0000 UTC m=+832.307343621" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.137784 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139256 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="registry-server" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139291 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" 
containerName="registry-server" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139313 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-content" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139328 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-content" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139406 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-utilities" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139419 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-utilities" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139592 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="registry-server" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.147200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.147365 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.149754 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.149904 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.151739 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.255745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"auto-csr-approver-29499864-pnc7n\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.357177 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"auto-csr-approver-29499864-pnc7n\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.376253 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"auto-csr-approver-29499864-pnc7n\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.466758 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.713401 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:24:01 crc kubenswrapper[5108]: I0202 00:24:01.067241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerStarted","Data":"a01de391b5cd6a122a36f19cff054fa668a0bc7266f343b71c5faa6068ff2623"} Feb 02 00:24:02 crc kubenswrapper[5108]: I0202 00:24:02.078033 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerStarted","Data":"998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874"} Feb 02 00:24:02 crc kubenswrapper[5108]: I0202 00:24:02.093990 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" podStartSLOduration=1.2239795980000001 podStartE2EDuration="2.093966789s" podCreationTimestamp="2026-02-02 00:24:00 +0000 UTC" firstStartedPulling="2026-02-02 00:24:00.737639409 +0000 UTC m=+840.013136369" lastFinishedPulling="2026-02-02 00:24:01.6076266 +0000 UTC m=+840.883123560" observedRunningTime="2026-02-02 00:24:02.092748619 +0000 UTC m=+841.368245559" watchObservedRunningTime="2026-02-02 00:24:02.093966789 +0000 UTC m=+841.369463729" Feb 02 00:24:03 crc kubenswrapper[5108]: I0202 00:24:03.104045 5108 generic.go:358] "Generic (PLEG): container finished" podID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerID="998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874" exitCode=0 Feb 02 00:24:03 crc kubenswrapper[5108]: I0202 00:24:03.104178 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerDied","Data":"998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874"} Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.437112 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.521357 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"085299b1-a0db-40df-ab74-d8bf934d61bc\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.535439 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr" (OuterVolumeSpecName: "kube-api-access-zf9gr") pod "085299b1-a0db-40df-ab74-d8bf934d61bc" (UID: "085299b1-a0db-40df-ab74-d8bf934d61bc"). InnerVolumeSpecName "kube-api-access-zf9gr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.622913 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") on node \"crc\" DevicePath \"\"" Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.670127 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.678870 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.122107 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerDied","Data":"a01de391b5cd6a122a36f19cff054fa668a0bc7266f343b71c5faa6068ff2623"} Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.122171 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01de391b5cd6a122a36f19cff054fa668a0bc7266f343b71c5faa6068ff2623" Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.122184 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.565407 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" path="/var/lib/kubelet/pods/431bfb08-11a6-4c66-893c-650ea32d97b3/volumes" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.672078 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.673769 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerName="oc" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.673789 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerName="oc" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.673986 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerName="oc" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.678927 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.682890 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.683301 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.683537 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.686778 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.687105 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.687337 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-mxfv9\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.687520 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.695390 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751456 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751601 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: 
I0202 00:24:13.751630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751676 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751698 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853594 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853665 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853760 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853783 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.854861 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863032 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863049 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863250 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.872031 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.875511 
5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:14 crc kubenswrapper[5108]: I0202 00:24:14.001173 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:14 crc kubenswrapper[5108]: I0202 00:24:14.211999 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:24:15 crc kubenswrapper[5108]: I0202 00:24:15.199379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerStarted","Data":"be460dd189cbfc5a2a37f3ba1e3bf4c61862c2876dd659904fe0292f2bbf5517"} Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.236307 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerStarted","Data":"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1"} Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.275473 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" podStartSLOduration=2.4035567970000002 podStartE2EDuration="7.275432395s" podCreationTimestamp="2026-02-02 00:24:13 +0000 UTC" firstStartedPulling="2026-02-02 00:24:14.222457188 +0000 UTC m=+853.497954128" lastFinishedPulling="2026-02-02 00:24:19.094332796 +0000 UTC m=+858.369829726" observedRunningTime="2026-02-02 00:24:20.269796751 +0000 UTC m=+859.545293691" watchObservedRunningTime="2026-02-02 00:24:20.275432395 +0000 UTC m=+859.550929365" Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.920114 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.920200 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.128438 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.607397 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.608573 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.612637 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.612874 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-9578k\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613013 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613272 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613358 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613431 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613717 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613756 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.614681 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.615795 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718340 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config-out\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718391 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-web-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718423 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718464 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718642 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brkjv\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-kube-api-access-brkjv\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718748 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718921 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.719432 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.719537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.719978 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-tls-assets\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " 
pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.821898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822064 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-tls-assets\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config-out\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822167 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-web-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822197 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brkjv\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-kube-api-access-brkjv\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " 
pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822410 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.824465 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: E0202 00:24:24.824644 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 02 00:24:24 crc kubenswrapper[5108]: E0202 00:24:24.824749 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls podName:3180ec82-70eb-4837-9eed-a92e41e5e3fc nodeName:}" failed. No retries permitted until 2026-02-02 00:24:25.324726425 +0000 UTC m=+864.600223365 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3180ec82-70eb-4837-9eed-a92e41e5e3fc") : secret "default-prometheus-proxy-tls" not found Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.825458 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.825827 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.826042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.829970 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.830016 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f745048d2e71c93a548e077a7ba1794f9de151f8f7067605ba7384d3e5bae71c/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.833364 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-tls-assets\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.833640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-web-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.835691 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.837655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config-out\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.845009 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.847745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brkjv\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-kube-api-access-brkjv\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.876716 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:25 crc kubenswrapper[5108]: I0202 00:24:25.330130 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:25 crc kubenswrapper[5108]: E0202 00:24:25.330313 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 02 00:24:25 crc kubenswrapper[5108]: E0202 00:24:25.330390 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls podName:3180ec82-70eb-4837-9eed-a92e41e5e3fc nodeName:}" failed. No retries permitted until 2026-02-02 00:24:26.330374662 +0000 UTC m=+865.605871582 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3180ec82-70eb-4837-9eed-a92e41e5e3fc") : secret "default-prometheus-proxy-tls" not found Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.346248 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.351827 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0" Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.445067 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.747261 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 02 00:24:27 crc kubenswrapper[5108]: I0202 00:24:27.297607 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"21816529956a895b1886c1da9681b3ad3a8c8ec009f5864512f2da090fdc8af4"} Feb 02 00:24:30 crc kubenswrapper[5108]: I0202 00:24:30.329995 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"241c26cd2a74392762363fb6bdfd7db40fcbd0e3c90a3a038e12d62ada2fcf10"} Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.514162 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"] Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.521394 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.525263 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"] Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.567939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w46v\" (UniqueName: \"kubernetes.io/projected/4431ddda-6bd1-43de-8d6e-c5829580e15e-kube-api-access-2w46v\") pod \"default-snmp-webhook-6774d8dfbc-sfrh8\" (UID: \"4431ddda-6bd1-43de-8d6e-c5829580e15e\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.669434 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2w46v\" (UniqueName: \"kubernetes.io/projected/4431ddda-6bd1-43de-8d6e-c5829580e15e-kube-api-access-2w46v\") pod \"default-snmp-webhook-6774d8dfbc-sfrh8\" (UID: \"4431ddda-6bd1-43de-8d6e-c5829580e15e\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.691348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w46v\" (UniqueName: \"kubernetes.io/projected/4431ddda-6bd1-43de-8d6e-c5829580e15e-kube-api-access-2w46v\") pod \"default-snmp-webhook-6774d8dfbc-sfrh8\" (UID: \"4431ddda-6bd1-43de-8d6e-c5829580e15e\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.836345 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" Feb 02 00:24:35 crc kubenswrapper[5108]: I0202 00:24:35.281877 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"] Feb 02 00:24:35 crc kubenswrapper[5108]: I0202 00:24:35.380938 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" event={"ID":"4431ddda-6bd1-43de-8d6e-c5829580e15e","Type":"ContainerStarted","Data":"0ed4782617619d2edca6daab3f32582a23837d60d35902016d6ae1f93645a7f5"} Feb 02 00:24:37 crc kubenswrapper[5108]: I0202 00:24:37.400655 5108 generic.go:358] "Generic (PLEG): container finished" podID="3180ec82-70eb-4837-9eed-a92e41e5e3fc" containerID="241c26cd2a74392762363fb6bdfd7db40fcbd0e3c90a3a038e12d62ada2fcf10" exitCode=0 Feb 02 00:24:37 crc kubenswrapper[5108]: I0202 00:24:37.400764 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerDied","Data":"241c26cd2a74392762363fb6bdfd7db40fcbd0e3c90a3a038e12d62ada2fcf10"} Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.429553 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.465961 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.466218 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469004 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469254 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469399 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469475 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469409 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469617 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-76qhb\"" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530797 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xklx\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-kube-api-access-4xklx\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6d411794-541c-4416-bd08-cd4f26bc73cb-config-out\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530927 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530943 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-config-volume\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530977 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-web-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.531009 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.531048 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.627568 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634656 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4xklx\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-kube-api-access-4xklx\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634694 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6d411794-541c-4416-bd08-cd4f26bc73cb-config-out\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634774 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634791 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634811 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-config-volume\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-web-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: E0202 00:24:38.647404 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 02 00:24:38 crc kubenswrapper[5108]: E0202 00:24:38.647624 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls podName:6d411794-541c-4416-bd08-cd4f26bc73cb nodeName:}" failed. No retries permitted until 2026-02-02 00:24:39.147598101 +0000 UTC m=+878.423095031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6d411794-541c-4416-bd08-cd4f26bc73cb") : secret "default-alertmanager-proxy-tls" not found Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.648471 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-web-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.650376 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.650454 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-config-volume\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.651151 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.651327 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.653895 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.663863 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
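
The two error entries above and the retries below trace kubelet's per-operation retry backoff (the nestedpendingoperations.go:348 lines) while the secret "default-alertmanager-proxy-tls" does not exist yet: the failed MountVolume.SetUp at 00:24:38 is rescheduled with durationBeforeRetry 500ms, the 00:24:39 failure with 1s, the 00:24:40 failure with 2s, and the mount finally succeeds at 00:24:42 once the secret has been created. The same doubling appears earlier in this log for "default-prometheus-proxy-tls" (500ms at 00:24:24, 1s at 00:24:25, success at 00:24:26). Below is a minimal Go sketch of that schedule only, not kubelet's actual code: mountSecretVolume is a hypothetical stand-in for MountVolume.SetUp, success is hard-wired to the fourth attempt to mirror this particular trace, and the ~2m2s cap is an assumption taken from kubelet's exponential-backoff defaults (it is never reached in this log).

    // Sketch of the doubling retry schedule observed above; not kubelet code.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond        // first durationBeforeRetry seen in the log
        maxDelay     = 2*time.Minute + 2*time.Second // assumed cap; never reached here
    )

    // mountSecretVolume is a hypothetical stand-in for MountVolume.SetUp: it
    // fails while the referenced secret is missing and succeeds once it exists.
    func mountSecretVolume(secretExists bool) error {
        if !secretExists {
            return errors.New(`secret "default-alertmanager-proxy-tls" not found`)
        }
        return nil
    }

    func main() {
        delay := initialDelay
        for attempt := 1; ; attempt++ {
            // In this trace the operator creates the secret before the 4th attempt.
            err := mountSecretVolume(attempt >= 4)
            if err == nil {
                fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
                return
            }
            fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, delay)
            time.Sleep(delay) // kubelet tracks the deadline per operation rather than sleeping
            delay *= 2        // 500ms -> 1s -> 2s, matching the entries above and below
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The pod_startup_latency_tracker entries in this section decompose the same way a quick subtraction suggests, consistent with podStartSLOduration excluding the image-pull window: for default-interconnect-55bf8d5cb-xsgkr, the 7.275s podStartE2EDuration minus the pull window 00:24:14.222 to 00:24:19.094 (about 4.872s) gives the reported ~2.404s podStartSLOduration, and the later certified-operators-h8vl8 entry checks out the same way (11.546s minus about 3.131s is ~8.415s).
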
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.663903 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/37e7f4321b567342cda29f8152351e56127ff3b7d1ccfdb5a5304f7e4517adc3/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.664843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xklx\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-kube-api-access-4xklx\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.666698 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6d411794-541c-4416-bd08-cd4f26bc73cb-config-out\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.668128 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.696042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.736380 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.736431 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.736482 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.838261 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.838428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.838454 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.839204 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.839281 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.860386 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:39 crc kubenswrapper[5108]: I0202 00:24:39.033285 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:39 crc kubenswrapper[5108]: I0202 00:24:39.244800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:39 crc kubenswrapper[5108]: E0202 00:24:39.245018 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 02 00:24:39 crc kubenswrapper[5108]: E0202 00:24:39.245152 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls podName:6d411794-541c-4416-bd08-cd4f26bc73cb nodeName:}" failed. No retries permitted until 2026-02-02 00:24:40.245130663 +0000 UTC m=+879.520627593 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6d411794-541c-4416-bd08-cd4f26bc73cb") : secret "default-alertmanager-proxy-tls" not found Feb 02 00:24:40 crc kubenswrapper[5108]: I0202 00:24:40.260223 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:40 crc kubenswrapper[5108]: E0202 00:24:40.260424 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 02 00:24:40 crc kubenswrapper[5108]: E0202 00:24:40.260550 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls podName:6d411794-541c-4416-bd08-cd4f26bc73cb nodeName:}" failed. No retries permitted until 2026-02-02 00:24:42.260525979 +0000 UTC m=+881.536022969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6d411794-541c-4416-bd08-cd4f26bc73cb") : secret "default-alertmanager-proxy-tls" not found Feb 02 00:24:42 crc kubenswrapper[5108]: I0202 00:24:42.289766 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:42 crc kubenswrapper[5108]: I0202 00:24:42.303211 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:42 crc kubenswrapper[5108]: I0202 00:24:42.385756 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 02 00:24:43 crc kubenswrapper[5108]: I0202 00:24:43.467564 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:24:43 crc kubenswrapper[5108]: I0202 00:24:43.507782 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 02 00:24:43 crc kubenswrapper[5108]: W0202 00:24:43.570310 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43972ad_8935_44fe_a3cb_4ae69a48b27a.slice/crio-144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f WatchSource:0}: Error finding container 144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f: Status 404 returned error can't find the container with id 144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f Feb 02 00:24:43 crc kubenswrapper[5108]: W0202 00:24:43.572648 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d411794_541c_4416_bd08_cd4f26bc73cb.slice/crio-11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7 WatchSource:0}: Error finding container 11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7: Status 404 returned error can't find the container with id 11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7 Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.450138 5108 generic.go:358] "Generic (PLEG): container finished" podID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerID="229b394fd30cf7d76b0f95baebefc43c286c621c34e04a9822ceaf4d47ea4ecb" exitCode=0 Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.450287 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"229b394fd30cf7d76b0f95baebefc43c286c621c34e04a9822ceaf4d47ea4ecb"} Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.450531 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerStarted","Data":"144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f"} Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.455359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" event={"ID":"4431ddda-6bd1-43de-8d6e-c5829580e15e","Type":"ContainerStarted","Data":"1881156c147b2c41cd6c0479734786a8b860c9cc836037ab9327d23883e7a18f"} Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.456777 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7"} Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.486973 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" podStartSLOduration=2.504582438 podStartE2EDuration="10.486958142s" podCreationTimestamp="2026-02-02 00:24:34 +0000 UTC" firstStartedPulling="2026-02-02 00:24:35.296638494 +0000 UTC m=+874.572135424" lastFinishedPulling="2026-02-02 00:24:43.279014198 +0000 UTC m=+882.554511128" observedRunningTime="2026-02-02 
00:24:44.48250242 +0000 UTC m=+883.757999350" watchObservedRunningTime="2026-02-02 00:24:44.486958142 +0000 UTC m=+883.762455072" Feb 02 00:24:45 crc kubenswrapper[5108]: I0202 00:24:45.465823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"894dc7cb45e63e8f24935dbff4b899be81fc89008187602b3aa77cf89c213a58"} Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.669243 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.677497 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.681499 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.765459 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.765776 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.765794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.867083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.867127 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.867211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.868066 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.868368 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.883991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.066250 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.343478 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.492271 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerStarted","Data":"ed59f3102a80ca4b5a1d7c10be89cb344fd7a76759d5c3c7818e734032b6f019"} Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.495212 5108 generic.go:358] "Generic (PLEG): container finished" podID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerID="abc0b81ac60fdf9242e7b8d30cb6c51ec290df312b9b70459e2737a2692347f4" exitCode=0 Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.495264 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"abc0b81ac60fdf9242e7b8d30cb6c51ec290df312b9b70459e2737a2692347f4"} Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.498750 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"c042ab2ed34c7be32865470364274d03a9e7b7842d9354a7980bc87c6a237a84"} Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.506541 5108 generic.go:358] "Generic (PLEG): container finished" podID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerID="aad8617a916aa584794ca1e18d38b92126d401c0258d25de6e56883166b73b19" exitCode=0 Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.506594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"aad8617a916aa584794ca1e18d38b92126d401c0258d25de6e56883166b73b19"} Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.512775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerStarted","Data":"27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0"} Feb 02 00:24:49 crc 
kubenswrapper[5108]: I0202 00:24:49.546030 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h8vl8" podStartSLOduration=8.414851688 podStartE2EDuration="11.546012881s" podCreationTimestamp="2026-02-02 00:24:38 +0000 UTC" firstStartedPulling="2026-02-02 00:24:44.450935046 +0000 UTC m=+883.726431976" lastFinishedPulling="2026-02-02 00:24:47.582096229 +0000 UTC m=+886.857593169" observedRunningTime="2026-02-02 00:24:49.540059358 +0000 UTC m=+888.815556308" watchObservedRunningTime="2026-02-02 00:24:49.546012881 +0000 UTC m=+888.821509811" Feb 02 00:24:50 crc kubenswrapper[5108]: I0202 00:24:50.534064 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"0b478e30864ece92a76348a619a09232cd0dc6be617f1ff16f5fbab47f0733d4"} Feb 02 00:24:50 crc kubenswrapper[5108]: I0202 00:24:50.918914 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:24:50 crc kubenswrapper[5108]: I0202 00:24:50.919598 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.544541 5108 generic.go:358] "Generic (PLEG): container finished" podID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerID="3b84a204056dd493507c5261ca60e0264a1a9ff8476ab36754509baeb69d95fb" exitCode=0 Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.544640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"3b84a204056dd493507c5261ca60e0264a1a9ff8476ab36754509baeb69d95fb"} Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.807763 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"] Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.820639 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"] Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.820779 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.823737 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.823813 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.824299 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-vnkgz\"" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.835484 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932145 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj2jz\" (UniqueName: \"kubernetes.io/projected/effd2c87-a358-47ac-869d-e9b26a40cb11-kube-api-access-gj2jz\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932378 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/effd2c87-a358-47ac-869d-e9b26a40cb11-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932569 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932601 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/effd2c87-a358-47ac-869d-e9b26a40cb11-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034509 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gj2jz\" (UniqueName: \"kubernetes.io/projected/effd2c87-a358-47ac-869d-e9b26a40cb11-kube-api-access-gj2jz\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034533 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/effd2c87-a358-47ac-869d-e9b26a40cb11-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034590 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034615 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/effd2c87-a358-47ac-869d-e9b26a40cb11-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.035775 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/effd2c87-a358-47ac-869d-e9b26a40cb11-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.036604 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.036714 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls podName:effd2c87-a358-47ac-869d-e9b26a40cb11 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:52.536688327 +0000 UTC m=+891.812185317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-2fppp" (UID: "effd2c87-a358-47ac-869d-e9b26a40cb11") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.037787 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/effd2c87-a358-47ac-869d-e9b26a40cb11-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.041898 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.055636 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj2jz\" (UniqueName: \"kubernetes.io/projected/effd2c87-a358-47ac-869d-e9b26a40cb11-kube-api-access-gj2jz\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.541015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.541243 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.541347 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls podName:effd2c87-a358-47ac-869d-e9b26a40cb11 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:53.541319906 +0000 UTC m=+892.816816866 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-2fppp" (UID: "effd2c87-a358-47ac-869d-e9b26a40cb11") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.555436 5108 generic.go:358] "Generic (PLEG): container finished" podID="6d411794-541c-4416-bd08-cd4f26bc73cb" containerID="894dc7cb45e63e8f24935dbff4b899be81fc89008187602b3aa77cf89c213a58" exitCode=0 Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.555535 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerDied","Data":"894dc7cb45e63e8f24935dbff4b899be81fc89008187602b3aa77cf89c213a58"} Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.561262 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerStarted","Data":"b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499"} Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.607652 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xj84" podStartSLOduration=4.6414505120000005 podStartE2EDuration="5.607630861s" podCreationTimestamp="2026-02-02 00:24:47 +0000 UTC" firstStartedPulling="2026-02-02 00:24:49.507508067 +0000 UTC m=+888.783004997" lastFinishedPulling="2026-02-02 00:24:50.473688376 +0000 UTC m=+889.749185346" observedRunningTime="2026-02-02 00:24:52.602947423 +0000 UTC m=+891.878444353" watchObservedRunningTime="2026-02-02 00:24:52.607630861 +0000 UTC m=+891.883127791" Feb 02 00:24:53 crc kubenswrapper[5108]: I0202 00:24:53.555944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:53 crc kubenswrapper[5108]: I0202 00:24:53.568533 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:53 crc kubenswrapper[5108]: I0202 00:24:53.637801 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.651316 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k"] Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.700408 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k"] Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.700653 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.704698 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.711725 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896390 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9fccb2ea-b40e-4375-81bf-1bedc36fd526-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9fccb2ea-b40e-4375-81bf-1bedc36fd526-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfmw\" (UniqueName: \"kubernetes.io/projected/9fccb2ea-b40e-4375-81bf-1bedc36fd526-kube-api-access-tnfmw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896593 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc 
kubenswrapper[5108]: I0202 00:24:54.998051 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9fccb2ea-b40e-4375-81bf-1bedc36fd526-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998097 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfmw\" (UniqueName: \"kubernetes.io/projected/9fccb2ea-b40e-4375-81bf-1bedc36fd526-kube-api-access-tnfmw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998125 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998146 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998216 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9fccb2ea-b40e-4375-81bf-1bedc36fd526-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.999126 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9fccb2ea-b40e-4375-81bf-1bedc36fd526-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.999431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9fccb2ea-b40e-4375-81bf-1bedc36fd526-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.000429 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.000505 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls podName:9fccb2ea-b40e-4375-81bf-1bedc36fd526 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:55.50048995 +0000 UTC m=+894.775986890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" (UID: "9fccb2ea-b40e-4375-81bf-1bedc36fd526") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:55 crc kubenswrapper[5108]: I0202 00:24:55.005472 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: I0202 00:24:55.031924 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfmw\" (UniqueName: \"kubernetes.io/projected/9fccb2ea-b40e-4375-81bf-1bedc36fd526-kube-api-access-tnfmw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: I0202 00:24:55.506324 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.506678 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.506842 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls podName:9fccb2ea-b40e-4375-81bf-1bedc36fd526 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:56.506810636 +0000 UTC m=+895.782307566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" (UID: "9fccb2ea-b40e-4375-81bf-1bedc36fd526") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:56 crc kubenswrapper[5108]: I0202 00:24:56.524688 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:56 crc kubenswrapper[5108]: I0202 00:24:56.532321 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:56 crc kubenswrapper[5108]: I0202 00:24:56.829551 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.066439 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.067773 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.136936 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.582888 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"] Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.609462 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k"] Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.671864 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.729784 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:24:58 crc kubenswrapper[5108]: W0202 00:24:58.948046 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeffd2c87_a358_47ac_869d_e9b26a40cb11.slice/crio-bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82 WatchSource:0}: Error finding container bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82: Status 404 returned error can't find the container with id bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82 Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.033386 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.033480 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.078387 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.542321 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4"] Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.922926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"07fd0b579f2088cc2eed006074ff62a37311f5d1c4dcda24d9af854d6be0e53c"} Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.923327 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4"] Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.923405 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82"} Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.924127 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.928486 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.932601 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.982824 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989751 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989822 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsf5h\" 
(UniqueName: \"kubernetes.io/projected/095466f0-3dfb-4daf-809c-188de8da2ee9-kube-api-access-wsf5h\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/095466f0-3dfb-4daf-809c-188de8da2ee9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989909 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/095466f0-3dfb-4daf-809c-188de8da2ee9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.091502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.091848 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.092205 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wsf5h\" (UniqueName: \"kubernetes.io/projected/095466f0-3dfb-4daf-809c-188de8da2ee9-kube-api-access-wsf5h\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.092315 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/095466f0-3dfb-4daf-809c-188de8da2ee9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.092424 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/095466f0-3dfb-4daf-809c-188de8da2ee9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc 
kubenswrapper[5108]: I0202 00:25:01.093348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/095466f0-3dfb-4daf-809c-188de8da2ee9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.093614 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/095466f0-3dfb-4daf-809c-188de8da2ee9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.100378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.100853 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.110882 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsf5h\" (UniqueName: \"kubernetes.io/projected/095466f0-3dfb-4daf-809c-188de8da2ee9-kube-api-access-wsf5h\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.247254 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.645513 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xj84" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" containerID="cri-o://b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499" gracePeriod=2 Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.794757 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.243445 5108 scope.go:117] "RemoveContainer" containerID="ff61ff81d7abb5723358d9eb219b89d933545279f212b14a8a7b31b99a0fd8b3" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.370664 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.371507 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.390936 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.391710 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.653460 5108 generic.go:358] "Generic (PLEG): container finished" podID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerID="b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499" exitCode=0 Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.653554 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499"} Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.654212 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h8vl8" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" containerID="cri-o://27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0" gracePeriod=2 Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.659107 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4"] Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.663720 5108 generic.go:358] "Generic (PLEG): container finished" podID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerID="27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0" exitCode=0 Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.663778 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0"} Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.666334 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"796252c50d779923668248d528a631e06b4dc9dac627170e9a8bc66a407054a6"} Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.996525 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.064481 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.064644 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.064851 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.066797 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities" (OuterVolumeSpecName: "utilities") pod "b43972ad-8935-44fe-a3cb-4ae69a48b27a" (UID: "b43972ad-8935-44fe-a3cb-4ae69a48b27a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.083043 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86" (OuterVolumeSpecName: "kube-api-access-hrt86") pod "b43972ad-8935-44fe-a3cb-4ae69a48b27a" (UID: "b43972ad-8935-44fe-a3cb-4ae69a48b27a"). InnerVolumeSpecName "kube-api-access-hrt86". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.111065 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b43972ad-8935-44fe-a3cb-4ae69a48b27a" (UID: "b43972ad-8935-44fe-a3cb-4ae69a48b27a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.166511 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.166548 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.166561 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.535949 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675338 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675428 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675476 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675498 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675477 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"ed59f3102a80ca4b5a1d7c10be89cb344fd7a76759d5c3c7818e734032b6f019"} Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675964 5108 scope.go:117] "RemoveContainer" containerID="b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.676185 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities" (OuterVolumeSpecName: "utilities") pod "da4c12ea-9e45-4b71-9f9a-565c93d8520f" (UID: "da4c12ea-9e45-4b71-9f9a-565c93d8520f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.682269 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk" (OuterVolumeSpecName: "kube-api-access-hkqxk") pod "da4c12ea-9e45-4b71-9f9a-565c93d8520f" (UID: "da4c12ea-9e45-4b71-9f9a-565c93d8520f"). InnerVolumeSpecName "kube-api-access-hkqxk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.687658 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f"} Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.687798 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.716671 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.725052 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.735538 5108 scope.go:117] "RemoveContainer" containerID="3b84a204056dd493507c5261ca60e0264a1a9ff8476ab36754509baeb69d95fb" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.736323 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da4c12ea-9e45-4b71-9f9a-565c93d8520f" (UID: "da4c12ea-9e45-4b71-9f9a-565c93d8520f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.776907 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.777144 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.777209 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.957334 5108 scope.go:117] "RemoveContainer" containerID="aad8617a916aa584794ca1e18d38b92126d401c0258d25de6e56883166b73b19" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.033522 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.040377 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.352044 5108 scope.go:117] "RemoveContainer" containerID="27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.436209 5108 scope.go:117] "RemoveContainer" containerID="abc0b81ac60fdf9242e7b8d30cb6c51ec290df312b9b70459e2737a2692347f4" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.457446 5108 scope.go:117] "RemoveContainer" containerID="229b394fd30cf7d76b0f95baebefc43c286c621c34e04a9822ceaf4d47ea4ecb" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.571649 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" path="/var/lib/kubelet/pods/b43972ad-8935-44fe-a3cb-4ae69a48b27a/volumes" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.573176 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" path="/var/lib/kubelet/pods/da4c12ea-9e45-4b71-9f9a-565c93d8520f/volumes" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.697329 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"00027b0d1ecfc071bcee298c391117d525ebc12f1a7d258d2046be39d16f353a"} Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.701031 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"60261bddc1358cb6371c6231f83867738c6f2a1c889df2042ce82b466ef763c2"} Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.730850 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=6.624462238 podStartE2EDuration="42.730833841s" podCreationTimestamp="2026-02-02 00:24:23 +0000 UTC" firstStartedPulling="2026-02-02 00:24:26.750845963 +0000 UTC m=+866.026342893" lastFinishedPulling="2026-02-02 00:25:02.857217556 +0000 UTC m=+902.132714496" 
observedRunningTime="2026-02-02 00:25:05.730358238 +0000 UTC m=+905.005855188" watchObservedRunningTime="2026-02-02 00:25:05.730833841 +0000 UTC m=+905.006330761" Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.445671 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.716901 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"5c839b40765bcc1d7216fe8932863226774aa07227d24c3ecd883e030671bac5"} Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.720447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"73ff2fa5277767b23d1c00f8c9dcfb2ff38f4efd2e94c1f9000405b6bef8ab78"} Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.726239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"7e047dd562c1a4096c3937885d7b4893c158027ce5820089513e15e2bd1936d7"} Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.580674 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2"] Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581330 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581344 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581365 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581371 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581379 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581386 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581403 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581409 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581420 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581425 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581438 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581443 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581548 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581560 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.587351 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.590309 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2"] Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.590948 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.591575 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724404 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/69974414-b4a3-48b4-ad93-b7b855ee08ea-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724473 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scx9\" (UniqueName: \"kubernetes.io/projected/69974414-b4a3-48b4-ad93-b7b855ee08ea-kube-api-access-7scx9\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/69974414-b4a3-48b4-ad93-b7b855ee08ea-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724556 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/69974414-b4a3-48b4-ad93-b7b855ee08ea-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/69974414-b4a3-48b4-ad93-b7b855ee08ea-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826550 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7scx9\" (UniqueName: \"kubernetes.io/projected/69974414-b4a3-48b4-ad93-b7b855ee08ea-kube-api-access-7scx9\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/69974414-b4a3-48b4-ad93-b7b855ee08ea-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826886 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/69974414-b4a3-48b4-ad93-b7b855ee08ea-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.829359 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/69974414-b4a3-48b4-ad93-b7b855ee08ea-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.832218 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/69974414-b4a3-48b4-ad93-b7b855ee08ea-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.838168 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.847769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/69974414-b4a3-48b4-ad93-b7b855ee08ea-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.881948 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scx9\" (UniqueName: 
\"kubernetes.io/projected/69974414-b4a3-48b4-ad93-b7b855ee08ea-kube-api-access-7scx9\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.915360 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.088852 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv"] Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.099656 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv"] Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.099801 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.106613 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.234429 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a85d430-d592-4eee-99f4-89aea943a820-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.234525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fcb\" (UniqueName: \"kubernetes.io/projected/7a85d430-d592-4eee-99f4-89aea943a820-kube-api-access-49fcb\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.234685 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a85d430-d592-4eee-99f4-89aea943a820-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.235155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a85d430-d592-4eee-99f4-89aea943a820-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.337755 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-49fcb\" (UniqueName: \"kubernetes.io/projected/7a85d430-d592-4eee-99f4-89aea943a820-kube-api-access-49fcb\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: 
\"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.337837 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a85d430-d592-4eee-99f4-89aea943a820-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.337952 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a85d430-d592-4eee-99f4-89aea943a820-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.338024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a85d430-d592-4eee-99f4-89aea943a820-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.339099 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a85d430-d592-4eee-99f4-89aea943a820-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.339920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a85d430-d592-4eee-99f4-89aea943a820-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.345592 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a85d430-d592-4eee-99f4-89aea943a820-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.358005 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fcb\" (UniqueName: \"kubernetes.io/projected/7a85d430-d592-4eee-99f4-89aea943a820-kube-api-access-49fcb\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.408017 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2"] Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.432851 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.753986 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"a3fb1e51380c85f9d6cc72dd9e531b5eaed4c864380caf334bfa66c037ce1bd8"} Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.754285 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"f34fa2ecfdc2b2904d8b3e00ca6f8ce1670030587199248f717fb7f8dc0539a5"} Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.757579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"9dfcdbec11103be6db6c2157a6425885febd77f0bb5b9849868fd748bf1f38b0"} Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.912509 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=16.037770339 podStartE2EDuration="31.91248217s" podCreationTimestamp="2026-02-02 00:24:37 +0000 UTC" firstStartedPulling="2026-02-02 00:24:52.556701627 +0000 UTC m=+891.832198557" lastFinishedPulling="2026-02-02 00:25:08.431413458 +0000 UTC m=+907.706910388" observedRunningTime="2026-02-02 00:25:08.775298888 +0000 UTC m=+908.050795818" watchObservedRunningTime="2026-02-02 00:25:08.91248217 +0000 UTC m=+908.187979100" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.920414 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv"] Feb 02 00:25:11 crc kubenswrapper[5108]: I0202 00:25:11.445394 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:11 crc kubenswrapper[5108]: I0202 00:25:11.494774 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:11 crc kubenswrapper[5108]: I0202 00:25:11.816770 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:12 crc kubenswrapper[5108]: W0202 00:25:12.365146 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a85d430_d592_4eee_99f4_89aea943a820.slice/crio-9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89 WatchSource:0}: Error finding container 9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89: Status 404 returned error can't find the container with id 9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89 Feb 02 00:25:12 crc kubenswrapper[5108]: I0202 00:25:12.804039 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.812561 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" 
event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.818628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.824818 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.828496 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.832601 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.874501 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"6edea8ab5eaf6fb252b78d4ed128752b4746309e765259c0adb7ff2ebd8440b6"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.877521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"186c748fd29aa9602cfdbcbc177ff1f08033051353b5259a7d1c614462eec6d1"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.880834 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"2e70b2446cf03e6a5ee77e0cf0a4dc86cdd0a17b3fa16cac2a8fa9c257064b12"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.883602 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"f5407960d70a5c2c1c3c605f67915b96b9641538a414d2d0428317578aa15cb4"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.885816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"252a1e5bac19051aa1231541315e513b166fbd6bc61dfd2554faea416e055edb"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.904169 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" podStartSLOduration=5.204066132 
podStartE2EDuration="10.904011374s" podCreationTimestamp="2026-02-02 00:25:08 +0000 UTC" firstStartedPulling="2026-02-02 00:25:12.367966039 +0000 UTC m=+911.643462989" lastFinishedPulling="2026-02-02 00:25:18.067911301 +0000 UTC m=+917.343408231" observedRunningTime="2026-02-02 00:25:18.897044896 +0000 UTC m=+918.172541886" watchObservedRunningTime="2026-02-02 00:25:18.904011374 +0000 UTC m=+918.179508344" Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.947123 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" podStartSLOduration=2.220597061 podStartE2EDuration="11.94710287s" podCreationTimestamp="2026-02-02 00:25:07 +0000 UTC" firstStartedPulling="2026-02-02 00:25:08.425877269 +0000 UTC m=+907.701374199" lastFinishedPulling="2026-02-02 00:25:18.152383068 +0000 UTC m=+917.427880008" observedRunningTime="2026-02-02 00:25:18.932342884 +0000 UTC m=+918.207839884" watchObservedRunningTime="2026-02-02 00:25:18.94710287 +0000 UTC m=+918.222599800" Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.951287 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" podStartSLOduration=8.806619759 podStartE2EDuration="27.951269951s" podCreationTimestamp="2026-02-02 00:24:51 +0000 UTC" firstStartedPulling="2026-02-02 00:24:58.950285175 +0000 UTC m=+898.225782105" lastFinishedPulling="2026-02-02 00:25:18.094935367 +0000 UTC m=+917.370432297" observedRunningTime="2026-02-02 00:25:18.948116317 +0000 UTC m=+918.223613247" watchObservedRunningTime="2026-02-02 00:25:18.951269951 +0000 UTC m=+918.226766891" Feb 02 00:25:19 crc kubenswrapper[5108]: I0202 00:25:19.005777 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" podStartSLOduration=4.62591278 podStartE2EDuration="20.005760375s" podCreationTimestamp="2026-02-02 00:24:59 +0000 UTC" firstStartedPulling="2026-02-02 00:25:02.667084184 +0000 UTC m=+901.942581114" lastFinishedPulling="2026-02-02 00:25:18.046931779 +0000 UTC m=+917.322428709" observedRunningTime="2026-02-02 00:25:18.971744932 +0000 UTC m=+918.247241932" watchObservedRunningTime="2026-02-02 00:25:19.005760375 +0000 UTC m=+918.281257305" Feb 02 00:25:19 crc kubenswrapper[5108]: I0202 00:25:19.010874 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" podStartSLOduration=5.844297165 podStartE2EDuration="25.010865282s" podCreationTimestamp="2026-02-02 00:24:54 +0000 UTC" firstStartedPulling="2026-02-02 00:24:58.955312703 +0000 UTC m=+898.230809633" lastFinishedPulling="2026-02-02 00:25:18.12188082 +0000 UTC m=+917.397377750" observedRunningTime="2026-02-02 00:25:19.003807292 +0000 UTC m=+918.279304222" watchObservedRunningTime="2026-02-02 00:25:19.010865282 +0000 UTC m=+918.286362212" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.332891 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.333174 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" 
containerID="cri-o://6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" gracePeriod=30 Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.724327 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.752129 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-7pdq9"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.752828 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.752847 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.753006 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.757947 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.772331 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-7pdq9"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.835845 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.835939 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836038 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836122 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836158 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836183 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836337 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836377 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836443 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-config\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836582 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmm5\" (UniqueName: \"kubernetes.io/projected/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-kube-api-access-9tmm5\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836597 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-users\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.837363 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.842068 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl" (OuterVolumeSpecName: "kube-api-access-b88kl") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "kube-api-access-b88kl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.844395 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.844466 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.845007 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.845089 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.845656 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). 
InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901433 5108 generic.go:358] "Generic (PLEG): container finished" podID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" exitCode=0 Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901485 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901526 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerDied","Data":"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1"} Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerDied","Data":"be460dd189cbfc5a2a37f3ba1e3bf4c61862c2876dd659904fe0292f2bbf5517"} Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901600 5108 scope.go:117] "RemoveContainer" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.922348 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.922421 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.922469 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.923001 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.923052 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d" gracePeriod=600 Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.926573 5108 scope.go:117] "RemoveContainer" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" Feb 02 00:25:20 crc kubenswrapper[5108]: E0202 00:25:20.928876 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1\": container with ID starting with 6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1 not found: ID does not exist" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.928921 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1"} err="failed to get container status \"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1\": rpc error: code = NotFound desc = could not find container \"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1\": container with ID starting with 6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1 not found: ID does not exist" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.937747 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938616 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-config\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938711 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmm5\" (UniqueName: 
\"kubernetes.io/projected/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-kube-api-access-9tmm5\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-users\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938783 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938794 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938803 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938812 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938821 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938830 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938839 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.942216 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.944186 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-users\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.944361 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-config\") pod 
\"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.946142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.947854 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.948188 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.948271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.966522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmm5\" (UniqueName: \"kubernetes.io/projected/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-kube-api-access-9tmm5\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.075749 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.569414 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" path="/var/lib/kubelet/pods/22703395-ebd0-469b-aec4-b703ed4a8e65/volumes" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.570696 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-7pdq9"] Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914028 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914141 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914430 5108 scope.go:117] "RemoveContainer" containerID="2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.919069 5108 generic.go:358] "Generic (PLEG): container finished" podID="7a85d430-d592-4eee-99f4-89aea943a820" containerID="5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.919160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerDied","Data":"5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.919944 5108 scope.go:117] "RemoveContainer" containerID="5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.923714 5108 generic.go:358] "Generic (PLEG): container finished" podID="9fccb2ea-b40e-4375-81bf-1bedc36fd526" containerID="50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.923806 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerDied","Data":"50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.924305 5108 scope.go:117] "RemoveContainer" containerID="50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.938265 5108 generic.go:358] "Generic (PLEG): container finished" podID="effd2c87-a358-47ac-869d-e9b26a40cb11" containerID="e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.938344 5108 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerDied","Data":"e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.938929 5108 scope.go:117] "RemoveContainer" containerID="e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.944658 5108 generic.go:358] "Generic (PLEG): container finished" podID="095466f0-3dfb-4daf-809c-188de8da2ee9" containerID="796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.944813 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerDied","Data":"796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.945662 5108 scope.go:117] "RemoveContainer" containerID="796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.955062 5108 generic.go:358] "Generic (PLEG): container finished" podID="69974414-b4a3-48b4-ad93-b7b855ee08ea" containerID="219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.955186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerDied","Data":"219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.956913 5108 scope.go:117] "RemoveContainer" containerID="219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.973555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" event={"ID":"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd","Type":"ContainerStarted","Data":"2fbf8a71649f3a98c7d31891c2d8a95b4ce92e749aed43e550a9af1c05e5939b"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.973727 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" event={"ID":"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd","Type":"ContainerStarted","Data":"569e8f5708b393ee6a89cd7a77a57b8b62f08e7fb9b85bfe4aeb1881d6f9de98"} Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.095531 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" podStartSLOduration=2.095503367 podStartE2EDuration="2.095503367s" podCreationTimestamp="2026-02-02 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:25:22.063189869 +0000 UTC m=+921.338686799" watchObservedRunningTime="2026-02-02 00:25:22.095503367 +0000 UTC m=+921.371000297" Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.984765 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" 
event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"88dae75f23c970f099452f64517336194a281bee143e83c5871bc2c78ce44fd9"} Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.993640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"0a19fd50e5b7e00a237f48e62454db3189305877c11a32a4ee888dcbc479a9d0"} Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.997666 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"144835ad071ad274c64988b578090a7870a57722a90c5a304eb6499ffa673778"} Feb 02 00:25:23 crc kubenswrapper[5108]: I0202 00:25:23.002042 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"b132791c697d0ba43a50dc4d3ea5279d0863a395830c3224c7af935ff6799a4f"} Feb 02 00:25:23 crc kubenswrapper[5108]: I0202 00:25:23.005313 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"3474e2473ec982f6042242fdb3e83622e3c71c770c4f847446b5b9779b2f737f"} Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.422989 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.555970 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.556172 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.559350 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.559607 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.582887 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/318a1230-b836-4db1-b9b7-8da7017365ad-qdr-test-config\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.583218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgc7g\" (UniqueName: \"kubernetes.io/projected/318a1230-b836-4db1-b9b7-8da7017365ad-kube-api-access-jgc7g\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.583384 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/318a1230-b836-4db1-b9b7-8da7017365ad-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.685083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/318a1230-b836-4db1-b9b7-8da7017365ad-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.685151 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/318a1230-b836-4db1-b9b7-8da7017365ad-qdr-test-config\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.685168 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jgc7g\" (UniqueName: \"kubernetes.io/projected/318a1230-b836-4db1-b9b7-8da7017365ad-kube-api-access-jgc7g\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.686064 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/318a1230-b836-4db1-b9b7-8da7017365ad-qdr-test-config\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.692969 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/318a1230-b836-4db1-b9b7-8da7017365ad-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 
00:25:29.702756 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgc7g\" (UniqueName: \"kubernetes.io/projected/318a1230-b836-4db1-b9b7-8da7017365ad-kube-api-access-jgc7g\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.880384 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 02 00:25:30 crc kubenswrapper[5108]: I0202 00:25:30.341584 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 02 00:25:31 crc kubenswrapper[5108]: I0202 00:25:31.075487 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"318a1230-b836-4db1-b9b7-8da7017365ad","Type":"ContainerStarted","Data":"10bf22125dfa147cc049cdd45a10169258bae1cc8e6679ce5417fc58f2de2a9d"} Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.143175 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"318a1230-b836-4db1-b9b7-8da7017365ad","Type":"ContainerStarted","Data":"eb5424d4b0be8225d9294450e8fd43a85057550b1eb85fa6e49dc4cd5a7fde77"} Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.162605 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.00773301 podStartE2EDuration="10.162588184s" podCreationTimestamp="2026-02-02 00:25:29 +0000 UTC" firstStartedPulling="2026-02-02 00:25:30.348315861 +0000 UTC m=+929.623812791" lastFinishedPulling="2026-02-02 00:25:38.503171045 +0000 UTC m=+937.778667965" observedRunningTime="2026-02-02 00:25:39.157735354 +0000 UTC m=+938.433232314" watchObservedRunningTime="2026-02-02 00:25:39.162588184 +0000 UTC m=+938.438085114" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.497872 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8jkkf"] Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.502982 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.504923 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.505262 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8jkkf"] Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.505437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.505449 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.506537 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.506585 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.508800 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.648963 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649008 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649147 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649320 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: 
\"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649396 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649483 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750824 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750869 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750942 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.751150 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.751257 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.751345 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" 
(UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752219 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752274 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752559 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752874 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752948 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.775356 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.821908 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.935343 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.951048 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.969636 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.057221 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"curl\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.158397 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"curl\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.177769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"curl\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.267012 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8jkkf"] Feb 02 00:25:40 crc kubenswrapper[5108]: W0202 00:25:40.278590 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod528509a5_e39b_4132_a319_38a57ed61f15.slice/crio-e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6 WatchSource:0}: Error finding container e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6: Status 404 returned error can't find the container with id e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6 Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.279793 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.528371 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 02 00:25:40 crc kubenswrapper[5108]: W0202 00:25:40.534691 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6916909_03ba_493b_9e93_11005e24910d.slice/crio-54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981 WatchSource:0}: Error finding container 54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981: Status 404 returned error can't find the container with id 54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981 Feb 02 00:25:41 crc kubenswrapper[5108]: I0202 00:25:41.157327 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerStarted","Data":"54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981"} Feb 02 00:25:41 crc kubenswrapper[5108]: I0202 00:25:41.158816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerStarted","Data":"e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6"} Feb 02 00:25:42 crc kubenswrapper[5108]: I0202 00:25:42.168906 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerStarted","Data":"1300001f353f4683812d97f9e858e85354c55cd2a9c3211149a64decf02392f1"} Feb 02 00:25:42 crc kubenswrapper[5108]: I0202 00:25:42.185374 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/curl" podStartSLOduration=1.7912723910000001 podStartE2EDuration="3.185352739s" podCreationTimestamp="2026-02-02 00:25:39 +0000 UTC" firstStartedPulling="2026-02-02 00:25:40.537058317 +0000 UTC m=+939.812555247" lastFinishedPulling="2026-02-02 00:25:41.931138665 +0000 UTC m=+941.206635595" observedRunningTime="2026-02-02 00:25:42.179181163 +0000 UTC m=+941.454678093" watchObservedRunningTime="2026-02-02 00:25:42.185352739 +0000 UTC m=+941.460849669" Feb 02 00:25:43 crc kubenswrapper[5108]: I0202 00:25:43.178251 5108 generic.go:358] "Generic (PLEG): container finished" podID="e6916909-03ba-493b-9e93-11005e24910d" containerID="1300001f353f4683812d97f9e858e85354c55cd2a9c3211149a64decf02392f1" exitCode=0 Feb 02 00:25:43 crc kubenswrapper[5108]: I0202 00:25:43.178461 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerDied","Data":"1300001f353f4683812d97f9e858e85354c55cd2a9c3211149a64decf02392f1"} Feb 02 00:25:47 crc kubenswrapper[5108]: I0202 00:25:47.918549 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.080471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"e6916909-03ba-493b-9e93-11005e24910d\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.103733 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8" (OuterVolumeSpecName: "kube-api-access-c8pv8") pod "e6916909-03ba-493b-9e93-11005e24910d" (UID: "e6916909-03ba-493b-9e93-11005e24910d"). InnerVolumeSpecName "kube-api-access-c8pv8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.160811 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_e6916909-03ba-493b-9e93-11005e24910d/curl/0.log" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.184368 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.231379 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.231400 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerDied","Data":"54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981"} Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.231431 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.513042 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-sfrh8_4431ddda-6bd1-43de-8d6e-c5829580e15e/prometheus-webhook-snmp/0.log" Feb 02 00:25:50 crc kubenswrapper[5108]: I0202 00:25:50.252882 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerStarted","Data":"894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b"} Feb 02 00:25:52 crc kubenswrapper[5108]: I0202 00:25:52.843782 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 00:25:58 crc kubenswrapper[5108]: I0202 00:25:58.322642 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerStarted","Data":"50d4a3199232e544a839082ea10f9fe20981e2285b6195620995522c629c7ff1"} Feb 02 00:25:58 crc kubenswrapper[5108]: I0202 00:25:58.348527 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" podStartSLOduration=2.384040331 podStartE2EDuration="19.348507445s" podCreationTimestamp="2026-02-02 00:25:39 +0000 UTC" firstStartedPulling="2026-02-02 00:25:40.282365601 +0000 UTC m=+939.557862571" 
lastFinishedPulling="2026-02-02 00:25:57.246832755 +0000 UTC m=+956.522329685" observedRunningTime="2026-02-02 00:25:58.342199536 +0000 UTC m=+957.617696476" watchObservedRunningTime="2026-02-02 00:25:58.348507445 +0000 UTC m=+957.624004375" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.151299 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.152290 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6916909-03ba-493b-9e93-11005e24910d" containerName="curl" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.152306 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6916909-03ba-493b-9e93-11005e24910d" containerName="curl" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.152472 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6916909-03ba-493b-9e93-11005e24910d" containerName="curl" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.157289 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.158668 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.163601 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.167494 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.172858 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.291014 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"auto-csr-approver-29499866-p4952\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.393186 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"auto-csr-approver-29499866-p4952\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.414375 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"auto-csr-approver-29499866-p4952\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.483529 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: W0202 00:26:00.820751 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11e42247_cef9_4651_977b_c8bf4f2a1265.slice/crio-eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4 WatchSource:0}: Error finding container eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4: Status 404 returned error can't find the container with id eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4 Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.821474 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:26:01 crc kubenswrapper[5108]: I0202 00:26:01.360494 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499866-p4952" event={"ID":"11e42247-cef9-4651-977b-c8bf4f2a1265","Type":"ContainerStarted","Data":"eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4"} Feb 02 00:26:02 crc kubenswrapper[5108]: I0202 00:26:02.371069 5108 generic.go:358] "Generic (PLEG): container finished" podID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerID="3d3f5106d313264d2e3037712f690e0c2856500894ef7b3799e7297fe1f37cee" exitCode=0 Feb 02 00:26:02 crc kubenswrapper[5108]: I0202 00:26:02.371149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499866-p4952" event={"ID":"11e42247-cef9-4651-977b-c8bf4f2a1265","Type":"ContainerDied","Data":"3d3f5106d313264d2e3037712f690e0c2856500894ef7b3799e7297fe1f37cee"} Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.639666 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.774861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"11e42247-cef9-4651-977b-c8bf4f2a1265\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.787153 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg" (OuterVolumeSpecName: "kube-api-access-mtgkg") pod "11e42247-cef9-4651-977b-c8bf4f2a1265" (UID: "11e42247-cef9-4651-977b-c8bf4f2a1265"). InnerVolumeSpecName "kube-api-access-mtgkg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.876996 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.388453 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.388568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499866-p4952" event={"ID":"11e42247-cef9-4651-977b-c8bf4f2a1265","Type":"ContainerDied","Data":"eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4"} Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.388642 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4" Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.707704 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.715081 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:26:05 crc kubenswrapper[5108]: I0202 00:26:05.567691 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" path="/var/lib/kubelet/pods/c1c738be-c891-4aa6-adfd-c1234cf80512/volumes" Feb 02 00:26:18 crc kubenswrapper[5108]: I0202 00:26:18.679196 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-sfrh8_4431ddda-6bd1-43de-8d6e-c5829580e15e/prometheus-webhook-snmp/0.log" Feb 02 00:26:23 crc kubenswrapper[5108]: I0202 00:26:23.565253 5108 generic.go:358] "Generic (PLEG): container finished" podID="528509a5-e39b-4132-a319-38a57ed61f15" containerID="894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b" exitCode=0 Feb 02 00:26:23 crc kubenswrapper[5108]: I0202 00:26:23.567084 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerDied","Data":"894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b"} Feb 02 00:26:23 crc kubenswrapper[5108]: I0202 00:26:23.567790 5108 scope.go:117] "RemoveContainer" containerID="894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b" Feb 02 00:26:29 crc kubenswrapper[5108]: I0202 00:26:29.610260 5108 generic.go:358] "Generic (PLEG): container finished" podID="528509a5-e39b-4132-a319-38a57ed61f15" containerID="50d4a3199232e544a839082ea10f9fe20981e2285b6195620995522c629c7ff1" exitCode=0 Feb 02 00:26:29 crc kubenswrapper[5108]: I0202 00:26:29.610494 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerDied","Data":"50d4a3199232e544a839082ea10f9fe20981e2285b6195620995522c629c7ff1"} Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.891824 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935391 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935532 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935609 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935629 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935652 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.952385 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg" (OuterVolumeSpecName: "kube-api-access-7n7cg") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "kube-api-access-7n7cg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.958652 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.960637 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.961497 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.962770 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.966350 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: E0202 00:26:30.969174 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script podName:528509a5-e39b-4132-a319-38a57ed61f15 nodeName:}" failed. No retries permitted until 2026-02-02 00:26:31.469142755 +0000 UTC m=+990.744639685 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "collectd-entrypoint-script" (UniqueName: "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15") : error deleting /var/lib/kubelet/pods/528509a5-e39b-4132-a319-38a57ed61f15/volume-subpaths: remove /var/lib/kubelet/pods/528509a5-e39b-4132-a319-38a57ed61f15/volume-subpaths: no such file or directory Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037747 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037781 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037790 5108 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037798 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037806 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037818 5108 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.545931 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.546680 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.630806 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.631007 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerDied","Data":"e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6"} Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.631069 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.653047 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:33 crc kubenswrapper[5108]: I0202 00:26:33.116037 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8jkkf_528509a5-e39b-4132-a319-38a57ed61f15/smoketest-collectd/0.log" Feb 02 00:26:33 crc kubenswrapper[5108]: I0202 00:26:33.451852 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8jkkf_528509a5-e39b-4132-a319-38a57ed61f15/smoketest-ceilometer/0.log" Feb 02 00:26:33 crc kubenswrapper[5108]: I0202 00:26:33.793421 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-7pdq9_b6c4ad43-6e88-4492-ac18-0889f4f1fcdd/default-interconnect/0.log" Feb 02 00:26:34 crc kubenswrapper[5108]: I0202 00:26:34.120350 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-2fppp_effd2c87-a358-47ac-869d-e9b26a40cb11/bridge/1.log" Feb 02 00:26:34 crc kubenswrapper[5108]: I0202 00:26:34.489582 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-2fppp_effd2c87-a358-47ac-869d-e9b26a40cb11/sg-core/0.log" Feb 02 00:26:34 crc kubenswrapper[5108]: I0202 00:26:34.817294 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2_69974414-b4a3-48b4-ad93-b7b855ee08ea/bridge/1.log" Feb 02 00:26:35 crc kubenswrapper[5108]: I0202 00:26:35.153109 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2_69974414-b4a3-48b4-ad93-b7b855ee08ea/sg-core/0.log" Feb 02 00:26:35 crc kubenswrapper[5108]: I0202 00:26:35.477703 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k_9fccb2ea-b40e-4375-81bf-1bedc36fd526/bridge/1.log" Feb 02 00:26:35 crc kubenswrapper[5108]: I0202 00:26:35.829645 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k_9fccb2ea-b40e-4375-81bf-1bedc36fd526/sg-core/0.log" Feb 02 00:26:36 crc kubenswrapper[5108]: I0202 00:26:36.093406 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv_7a85d430-d592-4eee-99f4-89aea943a820/bridge/1.log" Feb 02 00:26:36 crc kubenswrapper[5108]: I0202 00:26:36.413447 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv_7a85d430-d592-4eee-99f4-89aea943a820/sg-core/0.log" Feb 02 00:26:36 crc kubenswrapper[5108]: I0202 00:26:36.729646 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4_095466f0-3dfb-4daf-809c-188de8da2ee9/bridge/1.log" Feb 02 00:26:37 crc kubenswrapper[5108]: I0202 00:26:37.067717 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4_095466f0-3dfb-4daf-809c-188de8da2ee9/sg-core/0.log" Feb 02 00:26:40 crc kubenswrapper[5108]: I0202 00:26:40.184881 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-5f7rf_02251320-d565-4211-98ff-a138f7924888/operator/0.log" Feb 02 00:26:40 crc kubenswrapper[5108]: I0202 00:26:40.499420 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_3180ec82-70eb-4837-9eed-a92e41e5e3fc/prometheus/0.log" Feb 02 00:26:40 crc kubenswrapper[5108]: I0202 00:26:40.916134 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_91781fe7-72ca-4748-8dcd-5d7d1c275472/elasticsearch/0.log" Feb 02 00:26:41 crc kubenswrapper[5108]: I0202 00:26:41.298575 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-sfrh8_4431ddda-6bd1-43de-8d6e-c5829580e15e/prometheus-webhook-snmp/0.log" Feb 02 00:26:41 crc kubenswrapper[5108]: I0202 00:26:41.650534 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6d411794-541c-4416-bd08-cd4f26bc73cb/alertmanager/0.log" Feb 02 00:26:55 crc kubenswrapper[5108]: I0202 00:26:55.278702 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-6gtwj_1c4a2dde-667e-45e3-8d53-9219bcfd2214/operator/0.log" Feb 02 00:26:59 crc kubenswrapper[5108]: I0202 00:26:59.043912 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-5f7rf_02251320-d565-4211-98ff-a138f7924888/operator/0.log" Feb 02 00:26:59 crc kubenswrapper[5108]: I0202 00:26:59.338223 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_318a1230-b836-4db1-b9b7-8da7017365ad/qdr/0.log" Feb 02 00:27:02 crc kubenswrapper[5108]: I0202 00:27:02.593929 5108 scope.go:117] "RemoveContainer" containerID="4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.086296 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088220 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerName="oc" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088278 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerName="oc" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088351 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-ceilometer" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088363 5108 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-ceilometer" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088406 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-collectd" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088421 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-collectd" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088640 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-ceilometer" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088672 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-collectd" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088691 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerName="oc" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.102363 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.109163 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-gfw45\"/\"default-dockercfg-6rcjt\"" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.109434 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gfw45\"/\"openshift-service-ca.crt\"" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.120723 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.122180 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gfw45\"/\"kube-root-ca.crt\"" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.226319 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.226399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.328306 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.328770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgxj\" (UniqueName: 
\"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.328839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.364657 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.434069 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.892087 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:27:25 crc kubenswrapper[5108]: I0202 00:27:25.767613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerStarted","Data":"529a646df2f76a424219a9f5dc5ba8e321abac67ba88d1a3934022bfa5dc763c"} Feb 02 00:27:31 crc kubenswrapper[5108]: I0202 00:27:31.818541 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerStarted","Data":"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec"} Feb 02 00:27:31 crc kubenswrapper[5108]: I0202 00:27:31.819286 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerStarted","Data":"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"} Feb 02 00:27:31 crc kubenswrapper[5108]: I0202 00:27:31.851047 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gfw45/must-gather-74b7l" podStartSLOduration=1.977410702 podStartE2EDuration="7.851014857s" podCreationTimestamp="2026-02-02 00:27:24 +0000 UTC" firstStartedPulling="2026-02-02 00:27:24.899976981 +0000 UTC m=+1044.175473941" lastFinishedPulling="2026-02-02 00:27:30.773581166 +0000 UTC m=+1050.049078096" observedRunningTime="2026-02-02 00:27:31.835618516 +0000 UTC m=+1051.111115506" watchObservedRunningTime="2026-02-02 00:27:31.851014857 +0000 UTC m=+1051.126511827" Feb 02 00:27:50 crc kubenswrapper[5108]: I0202 00:27:50.919217 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:27:50 crc kubenswrapper[5108]: I0202 00:27:50.919907 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.137516 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499868-69fht"] Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.150200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499868-69fht"] Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.150361 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.163838 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.164183 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.164569 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.169001 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"auto-csr-approver-29499868-69fht\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.269966 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"auto-csr-approver-29499868-69fht\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.289323 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"auto-csr-approver-29499868-69fht\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.477407 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.725663 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499868-69fht"] Feb 02 00:28:01 crc kubenswrapper[5108]: I0202 00:28:01.052070 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerStarted","Data":"60475a5b44c4ea031badde77088258caa3d7d57e4f01df1f8639d96f27b575b4"} Feb 02 00:28:02 crc kubenswrapper[5108]: I0202 00:28:02.057936 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerStarted","Data":"57c873d43a8f95232b4d7911ca04e3bf56d61d09b31b1c7e45b22c63e97b03bc"} Feb 02 00:28:02 crc kubenswrapper[5108]: I0202 00:28:02.075464 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499868-69fht" podStartSLOduration=1.195990117 podStartE2EDuration="2.075449221s" podCreationTimestamp="2026-02-02 00:28:00 +0000 UTC" firstStartedPulling="2026-02-02 00:28:00.720769177 +0000 UTC m=+1079.996266107" lastFinishedPulling="2026-02-02 00:28:01.600228261 +0000 UTC m=+1080.875725211" observedRunningTime="2026-02-02 00:28:02.069853981 +0000 UTC m=+1081.345350911" watchObservedRunningTime="2026-02-02 00:28:02.075449221 +0000 UTC m=+1081.350946151" Feb 02 00:28:03 crc kubenswrapper[5108]: I0202 00:28:03.067220 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerID="57c873d43a8f95232b4d7911ca04e3bf56d61d09b31b1c7e45b22c63e97b03bc" exitCode=0 Feb 02 00:28:03 crc kubenswrapper[5108]: I0202 00:28:03.067379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerDied","Data":"57c873d43a8f95232b4d7911ca04e3bf56d61d09b31b1c7e45b22c63e97b03bc"} Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.354833 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.441967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"3a90f09a-fe0d-4118-b232-41084b3e197e\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.448414 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt" (OuterVolumeSpecName: "kube-api-access-8gzlt") pod "3a90f09a-fe0d-4118-b232-41084b3e197e" (UID: "3a90f09a-fe0d-4118-b232-41084b3e197e"). InnerVolumeSpecName "kube-api-access-8gzlt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.544519 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") on node \"crc\" DevicePath \"\"" Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.639828 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.644705 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.085864 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.085887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerDied","Data":"60475a5b44c4ea031badde77088258caa3d7d57e4f01df1f8639d96f27b575b4"} Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.086271 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60475a5b44c4ea031badde77088258caa3d7d57e4f01df1f8639d96f27b575b4" Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.566026 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" path="/var/lib/kubelet/pods/e35e90a5-9be9-4d25-a87f-80c879fadbdb/volumes" Feb 02 00:28:16 crc kubenswrapper[5108]: I0202 00:28:16.068556 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-qmhlw_00c9b96f-70c1-47b2-ab2f-570c9911ecaf/control-plane-machine-set-operator/0.log" Feb 02 00:28:16 crc kubenswrapper[5108]: I0202 00:28:16.177486 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-q88tw_688cb527-1d6f-4e22-9b14-4718201c8343/kube-rbac-proxy/0.log" Feb 02 00:28:16 crc kubenswrapper[5108]: I0202 00:28:16.249093 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-q88tw_688cb527-1d6f-4e22-9b14-4718201c8343/machine-api-operator/0.log" Feb 02 00:28:20 crc kubenswrapper[5108]: I0202 00:28:20.919315 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:28:20 crc kubenswrapper[5108]: I0202 00:28:20.919902 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:28:29 crc kubenswrapper[5108]: I0202 00:28:29.158855 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-z8j4s_f0e17311-6020-462f-9ab7-8db9a5b4fd53/cert-manager-controller/0.log" Feb 02 00:28:29 crc kubenswrapper[5108]: I0202 00:28:29.260611 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-gwlkp_9c526e59-9f54-4c07-9df7-9c254286c8b2/cert-manager-cainjector/0.log" Feb 02 00:28:29 crc kubenswrapper[5108]: I0202 00:28:29.363128 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-md5xl_36067e0f-9235-409f-83d9-125165d03451/cert-manager-webhook/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.094147 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-qx2r6_3cae4b55-dd8b-41da-85fd-e3a48cd48a84/prometheus-operator/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.242303 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld_7b30b62b-4640-4186-8cec-9a4bce652c54/prometheus-operator-admission-webhook/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.257624 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8_ea610d63-cdca-43f6-ae36-1021a5cfb158/prometheus-operator-admission-webhook/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.425349 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-tdjm6_6b7e0bd1-72e0-4772-a2cf-8287051d3acd/operator/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.464312 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-twmfp_600911fd-7824-48ed-a826-60768dce689a/perses-operator/0.log" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.919727 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.920474 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.920564 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.921767 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.921899 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87" gracePeriod=600 Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.425987 5108 generic.go:358] 
"Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87" exitCode=0 Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.426066 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"} Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.426421 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf"} Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.426441 5108 scope.go:117] "RemoveContainer" containerID="795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.009616 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.240310 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.241291 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.241866 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.404128 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.445896 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.482385 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/extract/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.597102 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.750454 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.776823 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.787742 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.944071 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.967560 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/extract/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.978360 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.109671 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.246727 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.306789 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.307145 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.453797 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.462519 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.490888 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/extract/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.632057 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.787853 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.793116 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.826168 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.988697 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/util/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.019950 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/extract/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.021168 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/pull/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.159868 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.310393 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.326144 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.342272 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.477971 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.478107 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.645000 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/registry-server/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.661679 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.829459 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.833064 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.843960 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.980384 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.004155 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.046883 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-t6j5g_e18aabab-6cfe-4b88-9efd-a44ecbcace87/marketplace-operator/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.199113 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.367407 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/registry-server/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.448844 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.455890 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.462874 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.637759 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.666120 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.738927 5108 scope.go:117] "RemoveContainer" containerID="ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.848948 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/registry-server/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.771754 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld_7b30b62b-4640-4186-8cec-9a4bce652c54/prometheus-operator-admission-webhook/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.800890 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-qx2r6_3cae4b55-dd8b-41da-85fd-e3a48cd48a84/prometheus-operator/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.808687 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8_ea610d63-cdca-43f6-ae36-1021a5cfb158/prometheus-operator-admission-webhook/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.873163 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-tdjm6_6b7e0bd1-72e0-4772-a2cf-8287051d3acd/operator/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.943883 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-twmfp_600911fd-7824-48ed-a826-60768dce689a/perses-operator/0.log" Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.005406 5108 generic.go:358] "Generic (PLEG): container finished" podID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" exitCode=0 Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.005514 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerDied","Data":"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"} Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.006821 5108 scope.go:117] "RemoveContainer" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.233615 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gfw45_must-gather-74b7l_cec16d3f-7f30-4430-8908-77ebaf0a9f23/gather/0.log" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.178390 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499870-ctgvw"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.180152 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerName="oc" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.180178 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerName="oc" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.180514 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerName="oc" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.186057 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.189556 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.190010 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.190398 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.191728 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.200483 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499870-ctgvw"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.200703 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.202784 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.208645 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.218630 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"auto-csr-approver-29499870-ctgvw\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349498 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"auto-csr-approver-29499870-ctgvw\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.453223 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.465918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.471842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.473972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"auto-csr-approver-29499870-ctgvw\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.533707 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.543192 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.784070 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499870-ctgvw"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.840922 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z"] Feb 02 00:30:00 crc kubenswrapper[5108]: W0202 00:30:00.844264 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c3b7760_ff06_45a3_9609_e0ff773cc0f9.slice/crio-87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12 WatchSource:0}: Error finding container 87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12: Status 404 returned error can't find the container with id 87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12 Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.048524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerStarted","Data":"0a5c3b29e3c5c29bb4783455b6db7b9f3d466624deee2b1a022cc0618ce7d5e5"} Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.048866 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerStarted","Data":"87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12"} Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.049567 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" event={"ID":"b68f73b5-5a31-4952-b8ff-9a40c538dbb5","Type":"ContainerStarted","Data":"989ca1b15394eea8e5d33c3bbea2a3255c1634bd971ebc13b1468521068b2528"} Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.595103 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" podStartSLOduration=1.5950842280000002 podStartE2EDuration="1.595084228s" podCreationTimestamp="2026-02-02 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:30:01.065420175 +0000 UTC m=+1200.340917125" watchObservedRunningTime="2026-02-02 00:30:01.595084228 +0000 UTC m=+1200.870581168" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.063383 5108 generic.go:358] "Generic (PLEG): container finished" podID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerID="0a5c3b29e3c5c29bb4783455b6db7b9f3d466624deee2b1a022cc0618ce7d5e5" exitCode=0 Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.063662 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerDied","Data":"0a5c3b29e3c5c29bb4783455b6db7b9f3d466624deee2b1a022cc0618ce7d5e5"} Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.524581 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.525129 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-gfw45/must-gather-74b7l" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" containerID="cri-o://3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" gracePeriod=2 Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.527343 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.533119 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.544539 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.563603 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.565194 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.570576 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.976996 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gfw45_must-gather-74b7l_cec16d3f-7f30-4430-8908-77ebaf0a9f23/copy/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.977937 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.979498 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.002747 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.002871 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.011752 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj" (OuterVolumeSpecName: "kube-api-access-9bgxj") pod "cec16d3f-7f30-4430-8908-77ebaf0a9f23" (UID: "cec16d3f-7f30-4430-8908-77ebaf0a9f23"). InnerVolumeSpecName "kube-api-access-9bgxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.078755 5108 generic.go:358] "Generic (PLEG): container finished" podID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerID="0f5d023d74c13fc2161662e458cd8e9221f4acccd2576cc07870a375b10daf4b" exitCode=0 Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.079069 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" event={"ID":"b68f73b5-5a31-4952-b8ff-9a40c538dbb5","Type":"ContainerDied","Data":"0f5d023d74c13fc2161662e458cd8e9221f4acccd2576cc07870a375b10daf4b"} Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.080536 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gfw45_must-gather-74b7l_cec16d3f-7f30-4430-8908-77ebaf0a9f23/copy/0.log" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.080913 5108 generic.go:358] "Generic (PLEG): container finished" podID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" exitCode=143 Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.081084 5108 scope.go:117] "RemoveContainer" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.082311 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cec16d3f-7f30-4430-8908-77ebaf0a9f23" (UID: "cec16d3f-7f30-4430-8908-77ebaf0a9f23"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.082972 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.095996 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.097595 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.104713 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.104745 5108 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.111994 5108 scope.go:117] "RemoveContainer" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.115299 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.215040 5108 scope.go:117] "RemoveContainer" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" Feb 02 00:30:03 crc kubenswrapper[5108]: E0202 00:30:03.215683 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec\": container with ID starting with 3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec not found: ID does not exist" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.215742 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec"} err="failed to get container status \"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec\": rpc error: code = NotFound desc = could not find container \"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec\": container with ID starting with 3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec not found: ID does not exist" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.215775 5108 scope.go:117] "RemoveContainer" 
containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:30:03 crc kubenswrapper[5108]: E0202 00:30:03.216057 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82\": container with ID starting with e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82 not found: ID does not exist" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.216088 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"} err="failed to get container status \"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82\": rpc error: code = NotFound desc = could not find container \"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82\": container with ID starting with e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82 not found: ID does not exist" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.284200 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.286260 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.309013 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.309265 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.309334 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.311412 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c3b7760-ff06-45a3-9609-e0ff773cc0f9" (UID: "8c3b7760-ff06-45a3-9609-e0ff773cc0f9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.316082 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c3b7760-ff06-45a3-9609-e0ff773cc0f9" (UID: "8c3b7760-ff06-45a3-9609-e0ff773cc0f9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.322801 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr" (OuterVolumeSpecName: "kube-api-access-tn8fr") pod "8c3b7760-ff06-45a3-9609-e0ff773cc0f9" (UID: "8c3b7760-ff06-45a3-9609-e0ff773cc0f9"). InnerVolumeSpecName "kube-api-access-tn8fr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.410625 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.410659 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.410667 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.565148 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" path="/var/lib/kubelet/pods/cec16d3f-7f30-4430-8908-77ebaf0a9f23/volumes" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.092473 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.092463 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerDied","Data":"87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12"} Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.092891 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.384610 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.424659 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.431186 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8" (OuterVolumeSpecName: "kube-api-access-v5xc8") pod "b68f73b5-5a31-4952-b8ff-9a40c538dbb5" (UID: "b68f73b5-5a31-4952-b8ff-9a40c538dbb5"). InnerVolumeSpecName "kube-api-access-v5xc8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.526739 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.105074 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.105085 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" event={"ID":"b68f73b5-5a31-4952-b8ff-9a40c538dbb5","Type":"ContainerDied","Data":"989ca1b15394eea8e5d33c3bbea2a3255c1634bd971ebc13b1468521068b2528"} Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.105628 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989ca1b15394eea8e5d33c3bbea2a3255c1634bd971ebc13b1468521068b2528" Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.434574 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.450828 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.574781 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" path="/var/lib/kubelet/pods/085299b1-a0db-40df-ab74-d8bf934d61bc/volumes" Feb 02 00:31:02 crc kubenswrapper[5108]: I0202 00:31:02.869375 5108 scope.go:117] "RemoveContainer" containerID="998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874" Feb 02 00:31:20 crc kubenswrapper[5108]: I0202 00:31:20.919383 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:31:20 crc kubenswrapper[5108]: I0202 00:31:20.920089 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:31:50 crc kubenswrapper[5108]: I0202 00:31:50.919746 5108 patch_prober.go:28] 
interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:31:50 crc kubenswrapper[5108]: I0202 00:31:50.920545 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.166528 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499872-zk7j8"] Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168815 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerName="oc" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168844 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerName="oc" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168883 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168896 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168960 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="gather" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168975 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="gather" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168995 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerName="collect-profiles" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169022 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerName="collect-profiles" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169259 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="gather" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169278 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerName="collect-profiles" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169302 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169330 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerName="oc" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.176783 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.187281 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499872-zk7j8"] Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.204941 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"auto-csr-approver-29499872-zk7j8\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.219611 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.219665 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.219991 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.306762 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"auto-csr-approver-29499872-zk7j8\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.341779 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"auto-csr-approver-29499872-zk7j8\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.543383 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.895879 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499872-zk7j8"] Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.899263 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:32:01 crc kubenswrapper[5108]: I0202 00:32:01.298293 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" event={"ID":"b4506d3f-997e-4dec-9101-f1ec1739a50f","Type":"ContainerStarted","Data":"d29de6f515db8ad61da3f51578d856cf4ac3ca0e6fa0e2f1d7692f04221cc376"} Feb 02 00:32:03 crc kubenswrapper[5108]: I0202 00:32:03.324214 5108 generic.go:358] "Generic (PLEG): container finished" podID="b4506d3f-997e-4dec-9101-f1ec1739a50f" containerID="35e1a3628fde542ef62f173467a4cb2b1959cb932bd354c8830c1dffb89265c0" exitCode=0 Feb 02 00:32:03 crc kubenswrapper[5108]: I0202 00:32:03.324860 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" event={"ID":"b4506d3f-997e-4dec-9101-f1ec1739a50f","Type":"ContainerDied","Data":"35e1a3628fde542ef62f173467a4cb2b1959cb932bd354c8830c1dffb89265c0"} Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.703076 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.808089 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"b4506d3f-997e-4dec-9101-f1ec1739a50f\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.819011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7" (OuterVolumeSpecName: "kube-api-access-nt2f7") pod "b4506d3f-997e-4dec-9101-f1ec1739a50f" (UID: "b4506d3f-997e-4dec-9101-f1ec1739a50f"). InnerVolumeSpecName "kube-api-access-nt2f7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.911505 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") on node \"crc\" DevicePath \"\"" Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.387326 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" event={"ID":"b4506d3f-997e-4dec-9101-f1ec1739a50f","Type":"ContainerDied","Data":"d29de6f515db8ad61da3f51578d856cf4ac3ca0e6fa0e2f1d7692f04221cc376"} Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.387375 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29de6f515db8ad61da3f51578d856cf4ac3ca0e6fa0e2f1d7692f04221cc376" Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.387463 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.792435 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.802496 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:32:07 crc kubenswrapper[5108]: I0202 00:32:07.565615 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" path="/var/lib/kubelet/pods/11e42247-cef9-4651-977b-c8bf4f2a1265/volumes" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.919874 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.921035 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.921140 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.922691 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.922800 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf" gracePeriod=600 Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543073 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf" exitCode=0 Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543143 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf"} Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"559704552cc5e72ad853827ae38d3ed9ab7634f1f7995e20fd99aa218e41b467"} Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543818 5108 scope.go:117] "RemoveContainer" 
containerID="a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137770302024452 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137770303017370 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137765125016521 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137765125015471 5ustar corecore